Jan 29 11:21:00 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 29 11:21:00 crc restorecon[4755]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 29 11:21:00 crc restorecon[4755]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 11:21:00 crc restorecon[4755]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 11:21:00 crc restorecon[4755]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 11:21:00 crc restorecon[4755]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 11:21:00 crc restorecon[4755]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 11:21:00 crc restorecon[4755]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 11:21:00 crc restorecon[4755]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 11:21:00 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 11:21:00 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 11:21:00 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 11:21:00 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 11:21:00 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 11:21:00 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 11:21:00 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 11:21:00 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 11:21:00 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 11:21:00 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 11:21:00 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 11:21:00 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 11:21:00 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 11:21:00 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 11:21:00 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 11:21:01 crc restorecon[4755]: 
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:01 crc restorecon[4755]: 
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:01 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: 
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 
11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 11:21:02 crc 
restorecon[4755]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 
11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 11:21:02 crc restorecon[4755]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 
11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc 
restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 11:21:02 crc restorecon[4755]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 11:21:02 crc restorecon[4755]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 11:21:02 crc restorecon[4755]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Jan 29 11:21:04 crc kubenswrapper[4766]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 11:21:04 crc kubenswrapper[4766]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 29 11:21:04 crc kubenswrapper[4766]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 11:21:04 crc kubenswrapper[4766]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
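[Annotation] The long run of restorecon "not reset as customized by admin" messages above is expected behavior rather than an error: container_file_t is an SELinux customizable type, so restorecon preserves the admin-set labels (including the per-pod MCS category pairs such as s0:c7,c13) unless it is invoked with -F. As a rough illustration, not part of the log, the following sketch summarizes which contexts were left in place; it assumes the journal excerpt has been saved to a local file named kubelet-journal.log (a hypothetical name, e.g. via journalctl > kubelet-journal.log):

    # Summarize the SELinux contexts that restorecon left in place.
    import re
    from collections import Counter

    NOT_RESET = re.compile(
        r"restorecon\[\d+\]: (?P<path>\S+) not reset as customized by admin "
        r"to (?P<context>system_u:object_r:\w+:s0(?::c\d+,c\d+)?)"
    )

    counts = Counter()
    with open("kubelet-journal.log") as fh:
        for line in fh:
            # finditer also handles multiple entries run together on one line
            for match in NOT_RESET.finditer(line):
                counts[match.group("context")] += 1

    for context, n in counts.most_common():
        print(f"{n:6d}  {context}")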
Jan 29 11:21:04 crc kubenswrapper[4766]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 11:21:04 crc kubenswrapper[4766]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.338016 4766 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353154 4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353197 4766 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353204 4766 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353209 4766 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353215 4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353220 4766 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353225 4766 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353229 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353236 4766 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
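[Annotation] The "Flag ... has been deprecated" warnings above say these kubelet command-line flags should move into the KubeletConfiguration file referenced by --config. A minimal sketch of that mapping follows, assuming the upstream kubelet.config.k8s.io/v1beta1 field names; the endpoint value shown is illustrative and not taken from this log:

    # Map the deprecated flags warned about above to KubeletConfiguration
    # (kubelet.config.k8s.io/v1beta1) field names. --minimum-container-ttl-duration
    # has no direct field (use evictionHard/evictionSoft), and the pause image
    # behind --pod-infra-container-image is configured in the CRI runtime itself,
    # so neither appears here.
    FLAG_TO_FIELD = {
        "--container-runtime-endpoint": "containerRuntimeEndpoint",
        "--volume-plugin-dir": "volumePluginDir",
        "--register-with-taints": "registerWithTaints",
        "--system-reserved": "systemReserved",
    }

    def to_config_stanza(flags):
        """Render a minimal YAML stanza for flags with scalar values; structured
        flags (taints, reserved resources) need proper YAML lists/maps instead."""
        lines = ["apiVersion: kubelet.config.k8s.io/v1beta1",
                 "kind: KubeletConfiguration"]
        for flag, value in flags.items():
            field = FLAG_TO_FIELD.get(flag)
            if field is not None:
                lines.append(f"{field}: {value}")
        return "\n".join(lines)

    # Illustrative value only; take the real endpoint from the unit's arguments.
    print(to_config_stanza({"--container-runtime-endpoint": "unix:///var/run/crio/crio.sock"}))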
Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353243 4766 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353249 4766 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353254 4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353258 4766 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353262 4766 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353268 4766 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353273 4766 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353277 4766 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353283 4766 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353288 4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353293 4766 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353301 4766 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353308 4766 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353315 4766 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353321 4766 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353327 4766 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353333 4766 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353339 4766 feature_gate.go:330] unrecognized feature gate: Example Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353345 4766 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353350 4766 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353355 4766 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353359 4766 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353363 4766 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353368 4766 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353373 4766 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353380 4766 feature_gate.go:330] 
unrecognized feature gate: UpgradeStatus Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353385 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353389 4766 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353394 4766 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353400 4766 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353405 4766 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353409 4766 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353413 4766 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353420 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353441 4766 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353446 4766 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353451 4766 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353459 4766 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353464 4766 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353469 4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353475 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353480 4766 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353485 4766 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353490 4766 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353495 4766 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353500 4766 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353504 4766 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353507 4766 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353511 4766 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353515 4766 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353520 4766 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
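
The long run of feature_gate.go warnings above (it continues below, and the whole set is re-logged several times during startup) mixes three cases that the source location distinguishes: :330 is a gate name this kubelet build does not recognize (presumably cluster-level OpenShift gates passed down to it, and simply ignored), :351 is a deprecated gate being set explicitly, and :353 is a GA gate being set explicitly. A short sketch that condenses the noise to unique names, assuming journal text on stdin:

#!/usr/bin/env python3
# Sketch: condense the repeated feature-gate warnings to unique entries.
import re
import sys
from collections import Counter

UNRECOGNIZED = re.compile(r"feature_gate\.go:330\] unrecognized feature gate: ([A-Za-z0-9]+)")
EXPLICIT = re.compile(r"feature_gate\.go:35[13]\] Setting (GA|deprecated) feature gate (\w+)=(\w+)")

unknown = Counter()
explicit = {}
for line in sys.stdin:
    unknown.update(UNRECOGNIZED.findall(line))
    for kind, name, value in EXPLICIT.findall(line):
        explicit[name] = (kind, value)

print(f"{len(unknown)} distinct unrecognized gates")
for name, count in unknown.most_common():
    print(f"  {name}: warned {count}x")
for name, (kind, value) in sorted(explicit.items()):
    print(f"  {name}={value} ({kind}, slated for removal)")
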
Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353525 4766 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353529 4766 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353533 4766 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353537 4766 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353542 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353546 4766 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353549 4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353554 4766 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353558 4766 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353564 4766 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.353569 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353682 4766 flags.go:64] FLAG: --address="0.0.0.0" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353697 4766 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353708 4766 flags.go:64] FLAG: --anonymous-auth="true" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353715 4766 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353723 4766 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353728 4766 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353736 4766 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353743 4766 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353748 4766 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353753 4766 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353759 4766 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353766 4766 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353771 4766 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353776 4766 flags.go:64] FLAG: --cgroup-root="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353783 4766 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353787 4766 flags.go:64] FLAG: --client-ca-file="" Jan 29 
11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353791 4766 flags.go:64] FLAG: --cloud-config="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353794 4766 flags.go:64] FLAG: --cloud-provider="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353798 4766 flags.go:64] FLAG: --cluster-dns="[]" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353804 4766 flags.go:64] FLAG: --cluster-domain="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353808 4766 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353812 4766 flags.go:64] FLAG: --config-dir="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353816 4766 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353821 4766 flags.go:64] FLAG: --container-log-max-files="5" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353827 4766 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353831 4766 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353835 4766 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353839 4766 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353844 4766 flags.go:64] FLAG: --contention-profiling="false" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353848 4766 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353852 4766 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353857 4766 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353861 4766 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353867 4766 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353872 4766 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353876 4766 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353880 4766 flags.go:64] FLAG: --enable-load-reader="false" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353884 4766 flags.go:64] FLAG: --enable-server="true" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353888 4766 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353895 4766 flags.go:64] FLAG: --event-burst="100" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353899 4766 flags.go:64] FLAG: --event-qps="50" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353903 4766 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353907 4766 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353911 4766 flags.go:64] FLAG: --eviction-hard="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353917 4766 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353921 4766 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 29 11:21:04 crc 
kubenswrapper[4766]: I0129 11:21:04.353925 4766 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353930 4766 flags.go:64] FLAG: --eviction-soft="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353934 4766 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353938 4766 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353942 4766 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353946 4766 flags.go:64] FLAG: --experimental-mounter-path="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353950 4766 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353954 4766 flags.go:64] FLAG: --fail-swap-on="true" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353958 4766 flags.go:64] FLAG: --feature-gates="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353963 4766 flags.go:64] FLAG: --file-check-frequency="20s" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353967 4766 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353972 4766 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353976 4766 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353980 4766 flags.go:64] FLAG: --healthz-port="10248" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353984 4766 flags.go:64] FLAG: --help="false" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353988 4766 flags.go:64] FLAG: --hostname-override="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353992 4766 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.353997 4766 flags.go:64] FLAG: --http-check-frequency="20s" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354001 4766 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354005 4766 flags.go:64] FLAG: --image-credential-provider-config="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354010 4766 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354014 4766 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354018 4766 flags.go:64] FLAG: --image-service-endpoint="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354022 4766 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354026 4766 flags.go:64] FLAG: --kube-api-burst="100" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354030 4766 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354035 4766 flags.go:64] FLAG: --kube-api-qps="50" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354039 4766 flags.go:64] FLAG: --kube-reserved="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354043 4766 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354046 4766 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 29 11:21:04 crc 
kubenswrapper[4766]: I0129 11:21:04.354051 4766 flags.go:64] FLAG: --kubelet-cgroups="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354055 4766 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354059 4766 flags.go:64] FLAG: --lock-file="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354063 4766 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354067 4766 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354072 4766 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354079 4766 flags.go:64] FLAG: --log-json-split-stream="false" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354084 4766 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354088 4766 flags.go:64] FLAG: --log-text-split-stream="false" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354093 4766 flags.go:64] FLAG: --logging-format="text" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354098 4766 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354102 4766 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354106 4766 flags.go:64] FLAG: --manifest-url="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354110 4766 flags.go:64] FLAG: --manifest-url-header="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354122 4766 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354127 4766 flags.go:64] FLAG: --max-open-files="1000000" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354132 4766 flags.go:64] FLAG: --max-pods="110" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354137 4766 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354141 4766 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354145 4766 flags.go:64] FLAG: --memory-manager-policy="None" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354150 4766 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354154 4766 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354158 4766 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354163 4766 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354175 4766 flags.go:64] FLAG: --node-status-max-images="50" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354179 4766 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354184 4766 flags.go:64] FLAG: --oom-score-adj="-999" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354188 4766 flags.go:64] FLAG: --pod-cidr="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354192 4766 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354198 4766 flags.go:64] FLAG: --pod-manifest-path="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354202 4766 flags.go:64] FLAG: --pod-max-pids="-1" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354207 4766 flags.go:64] FLAG: --pods-per-core="0" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354211 4766 flags.go:64] FLAG: --port="10250" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354215 4766 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354219 4766 flags.go:64] FLAG: --provider-id="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354224 4766 flags.go:64] FLAG: --qos-reserved="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354228 4766 flags.go:64] FLAG: --read-only-port="10255" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354232 4766 flags.go:64] FLAG: --register-node="true" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354236 4766 flags.go:64] FLAG: --register-schedulable="true" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354240 4766 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354248 4766 flags.go:64] FLAG: --registry-burst="10" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354252 4766 flags.go:64] FLAG: --registry-qps="5" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354256 4766 flags.go:64] FLAG: --reserved-cpus="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354260 4766 flags.go:64] FLAG: --reserved-memory="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354266 4766 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354270 4766 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354274 4766 flags.go:64] FLAG: --rotate-certificates="false" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354279 4766 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354283 4766 flags.go:64] FLAG: --runonce="false" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354286 4766 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354291 4766 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354295 4766 flags.go:64] FLAG: --seccomp-default="false" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354299 4766 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354303 4766 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354307 4766 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354311 4766 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354315 4766 flags.go:64] FLAG: --storage-driver-password="root" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354319 4766 flags.go:64] FLAG: --storage-driver-secure="false" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 
11:21:04.354323 4766 flags.go:64] FLAG: --storage-driver-table="stats" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354327 4766 flags.go:64] FLAG: --storage-driver-user="root" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354331 4766 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354335 4766 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354339 4766 flags.go:64] FLAG: --system-cgroups="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354343 4766 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354349 4766 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354353 4766 flags.go:64] FLAG: --tls-cert-file="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354357 4766 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354367 4766 flags.go:64] FLAG: --tls-min-version="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354371 4766 flags.go:64] FLAG: --tls-private-key-file="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354375 4766 flags.go:64] FLAG: --topology-manager-policy="none" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354379 4766 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354383 4766 flags.go:64] FLAG: --topology-manager-scope="container" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354387 4766 flags.go:64] FLAG: --v="2" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354393 4766 flags.go:64] FLAG: --version="false" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354401 4766 flags.go:64] FLAG: --vmodule="" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354406 4766 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354414 4766 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354542 4766 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354548 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354555 4766 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354560 4766 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354565 4766 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354570 4766 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354575 4766 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354579 4766 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354583 4766 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354587 4766 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 
29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354591 4766 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354595 4766 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354598 4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354602 4766 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354605 4766 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354609 4766 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354613 4766 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354617 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354621 4766 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354626 4766 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354630 4766 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354634 4766 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354640 4766 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354644 4766 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354648 4766 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354652 4766 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354656 4766 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354660 4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354663 4766 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354667 4766 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354671 4766 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354674 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354678 4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354683 4766 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354686 4766 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 11:21:04 crc kubenswrapper[4766]: 
W0129 11:21:04.354690 4766 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354693 4766 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354696 4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354701 4766 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354704 4766 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354707 4766 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354711 4766 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354714 4766 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354718 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354721 4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354726 4766 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354730 4766 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354734 4766 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354738 4766 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354742 4766 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354745 4766 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354749 4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354753 4766 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354756 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354762 4766 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354765 4766 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354769 4766 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354772 4766 feature_gate.go:330] unrecognized feature gate: Example Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354776 4766 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354779 4766 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354783 4766 feature_gate.go:330] unrecognized feature gate: 
InsightsOnDemandDataGather Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354786 4766 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354789 4766 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354794 4766 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354799 4766 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354804 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354808 4766 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354811 4766 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354815 4766 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354818 4766 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.354822 4766 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.354834 4766 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.384066 4766 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.384144 4766 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384220 4766 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384231 4766 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
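
Between the warning passes sits the flags.go:64 dump: the kubelet's entire flag set, one FLAG: --name="value" line each, defaults and overrides alike (e.g. --node-ip="192.168.126.11", --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"). A sketch that rebuilds it as a dictionary, again assuming one journal entry per line on stdin:

#!/usr/bin/env python3
# Sketch: reconstruct the kubelet's effective flag set from the
# flags.go:64 "FLAG: --name=\"value\"" dump above.
import re
import sys

FLAG = re.compile(r'flags\.go:64\] FLAG: (--[\w.-]+)="(.*?)"\s*$')

flags = {}
for line in sys.stdin:
    m = FLAG.search(line)
    if m:
        flags[m.group(1)] = m.group(2)

# Spot-check the values the deprecation warnings referred to:
for name in ("--container-runtime-endpoint", "--system-reserved", "--register-with-taints"):
    print(name, "=", flags.get(name))
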
Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384238 4766 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384244 4766 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384248 4766 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384253 4766 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384257 4766 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384261 4766 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384264 4766 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384268 4766 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384272 4766 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384276 4766 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384280 4766 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384284 4766 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384288 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384292 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384296 4766 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384300 4766 feature_gate.go:330] unrecognized feature gate: Example Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384305 4766 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384311 4766 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384316 4766 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384321 4766 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384326 4766 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384331 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384335 4766 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384339 4766 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384344 4766 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384349 4766 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384354 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384362 4766 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384367 4766 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384371 4766 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384377 4766 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384383 4766 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384388 4766 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384393 4766 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384398 4766 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384403 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384407 4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384412 4766 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384416 4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384438 4766 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384443 4766 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384447 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384453 4766 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384458 4766 feature_gate.go:330] 
unrecognized feature gate: ManagedBootImagesAWS Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384462 4766 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384467 4766 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384471 4766 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384477 4766 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384482 4766 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384488 4766 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384493 4766 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384498 4766 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384503 4766 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384509 4766 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384514 4766 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384518 4766 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384523 4766 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384527 4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384532 4766 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384537 4766 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384541 4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384546 4766 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384553 4766 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384561 4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384566 4766 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384571 4766 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384578 4766 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384584 4766 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384589 4766 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.384598 4766 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384744 4766 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384753 4766 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384758 4766 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384764 4766 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384769 4766 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384774 4766 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384779 4766 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384784 4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384790 4766 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384795 4766 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384801 4766 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384808 4766 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
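
Each warning pass ends with a feature_gate.go:386 summary line giving the effective gate map after merging: in this log CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders, KMSv1 and ValidatingAdmissionPolicy come out true and the rest false. The Go map[...] syntax lifts easily into a dict; a sketch, using a copy of one summary line shortened here for width:

#!/usr/bin/env python3
# Sketch: parse the kubelet's "feature gates: {map[...]}" summary line.
import re

# Shortened copy of the feature_gate.go:386 line above:
line = ("feature gates: {map[CloudDualStackNodeIPs:true "
        "DisableKubeletCloudCredentialProviders:true KMSv1:true "
        "NodeSwap:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}")

m = re.search(r"feature gates: \{map\[([^\]]*)\]\}", line)
gates = {}
if m:
    for pair in m.group(1).split():
        name, _, value = pair.partition(":")
        gates[name] = value == "true"

print(sorted(name for name, on in gates.items() if on))
# -> ['CloudDualStackNodeIPs', 'DisableKubeletCloudCredentialProviders', 'KMSv1', 'ValidatingAdmissionPolicy']
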
Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384814 4766 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384820 4766 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384825 4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384829 4766 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384834 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384838 4766 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384843 4766 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384847 4766 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384852 4766 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384856 4766 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384861 4766 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384865 4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384870 4766 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384877 4766 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384882 4766 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384887 4766 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384892 4766 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384899 4766 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384904 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384909 4766 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384915 4766 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384919 4766 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384925 4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384929 4766 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384934 4766 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384939 4766 feature_gate.go:330] unrecognized feature gate: 
OVNObservability Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384944 4766 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384949 4766 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384954 4766 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384958 4766 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384962 4766 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384967 4766 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384971 4766 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384976 4766 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384980 4766 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384985 4766 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384989 4766 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384994 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.384999 4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.385003 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.385008 4766 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.385013 4766 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.385017 4766 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.385022 4766 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.385027 4766 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.385033 4766 feature_gate.go:330] unrecognized feature gate: Example Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.385037 4766 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.385044 4766 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.385051 4766 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.385057 4766 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.385062 4766 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.385068 4766 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.385072 4766 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.385079 4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.385083 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.385088 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.385093 4766 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.385097 4766 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.385103 4766 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.385112 4766 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.388113 4766 server.go:940] "Client rotation is on, will bootstrap in background" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.403104 4766 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.403212 4766 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
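
Client certificate rotation is on and the current pair loads from /var/lib/kubelet/pki/kubelet-client-current.pem; the next lines show the rotation math (expiry 2026-02-24, rotation deadline 2025-11-26) and a first CSR attempt failing while the API server is still unreachable. To eyeball the same expiry on the node, a sketch that shells out to openssl, which is assumed to be on PATH and which reads the first certificate block in the combined cert/key bundle; running it likely requires root:

#!/usr/bin/env python3
# Sketch: print subject and expiry of the kubelet client certificate.
import subprocess

PEM = "/var/lib/kubelet/pki/kubelet-client-current.pem"

out = subprocess.run(
    ["openssl", "x509", "-in", PEM, "-noout", "-subject", "-enddate"],
    check=True, capture_output=True, text=True,
).stdout
print(out, end="")
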
Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.423827 4766 server.go:997] "Starting client certificate rotation" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.423889 4766 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.424085 4766 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-26 06:40:22.062963281 +0000 UTC Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.424205 4766 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.492668 4766 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.511620 4766 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 11:21:04 crc kubenswrapper[4766]: E0129 11:21:04.513585 4766 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.661087 4766 log.go:25] "Validated CRI v1 runtime API" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.801569 4766 log.go:25] "Validated CRI v1 image API" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.806995 4766 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.813044 4766 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-29-11-15-05-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.813120 4766 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.865797 4766 manager.go:217] Machine: {Timestamp:2026-01-29 11:21:04.833494329 +0000 UTC m=+1.945887390 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:e1cf5141-f02b-4b4b-ad4c-52cf74069ee2 BootID:63ba66e3-115c-4d10-9153-6b9869c521f9 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 
Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:d5:5b:c7 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:d5:5b:c7 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:7f:ef:5f Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:6f:22:7b Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:e6:77:eb Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:a7:89:56 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:42:fd:7d Speed:-1 Mtu:1496} {Name:eth10 MacAddress:42:44:f5:ac:58:b0 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:1e:6f:db:2b:f7:86 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified 
Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.866202 4766 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.866506 4766 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.867014 4766 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.867211 4766 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.867263 4766 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.867524 4766 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 
11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.867535 4766 container_manager_linux.go:303] "Creating device plugin manager" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.876005 4766 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.876037 4766 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.879743 4766 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.879856 4766 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.888641 4766 kubelet.go:418] "Attempting to sync node with API server" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.888694 4766 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.888718 4766 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.888733 4766 kubelet.go:324] "Adding apiserver pod source" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.888751 4766 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.902125 4766 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.911100 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.911174 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 29 11:21:04 crc kubenswrapper[4766]: E0129 11:21:04.911207 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:21:04 crc kubenswrapper[4766]: E0129 11:21:04.911235 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.912154 4766 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.916697 4766 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.927076 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.927124 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.927133 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.927140 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.927152 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.927159 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.927166 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.927178 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.927188 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.927196 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.927207 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.927214 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.928905 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.929479 4766 server.go:1280] "Started kubelet" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.930765 4766 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.930750 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.930772 4766 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.931976 4766 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.932484 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.932530 4766 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.932572 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 04:50:33.375067078 +0000 UTC Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.933207 4766 volume_manager.go:287] "The 
desired_state_of_world populator starts" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.933250 4766 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 11:21:04 crc systemd[1]: Started Kubernetes Kubelet. Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.936248 4766 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 11:21:04 crc kubenswrapper[4766]: W0129 11:21:04.937655 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 29 11:21:04 crc kubenswrapper[4766]: E0129 11:21:04.937037 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 11:21:04 crc kubenswrapper[4766]: E0129 11:21:04.937751 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.945579 4766 factory.go:55] Registering systemd factory Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.945625 4766 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:21:04 crc kubenswrapper[4766]: E0129 11:21:04.946314 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="200ms" Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.946995 4766 factory.go:153] Registering CRI-O factory Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.947032 4766 factory.go:221] Registration of the crio container factory successfully Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.947126 4766 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.947163 4766 factory.go:103] Registering Raw factory Jan 29 11:21:04 crc kubenswrapper[4766]: I0129 11:21:04.947189 4766 manager.go:1196] Started watching for new ooms in manager Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.034731 4766 server.go:460] "Adding debug handlers to kubelet server" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.034785 4766 manager.go:319] Starting recovery of all containers Jan 29 11:21:05 crc kubenswrapper[4766]: E0129 11:21:05.038076 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 11:21:05 crc kubenswrapper[4766]: E0129 11:21:05.035680 4766 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.194:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f2fbc6e385fd6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 11:21:04.929439702 +0000 UTC m=+2.041832733,LastTimestamp:2026-01-29 11:21:04.929439702 +0000 UTC m=+2.041832733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.056697 4766 manager.go:324] Recovery completed Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.060088 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.060152 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.069694 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070240 4766 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070324 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070344 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070359 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070373 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070384 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070397 4766 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070454 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070466 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070478 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070489 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070501 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070515 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070571 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070585 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070600 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070615 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070626 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070638 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070651 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070705 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070720 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070731 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070744 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070755 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070804 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070821 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070835 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070878 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070892 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070906 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.070997 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071034 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071103 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071116 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071148 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071164 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071176 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071188 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071201 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071214 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071227 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071240 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071244 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071253 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071273 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071276 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071284 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071290 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071305 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071327 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071342 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" 
Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071354 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071367 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071389 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071415 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071442 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071479 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071497 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071515 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071627 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071647 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.071665 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 
11:21:05.072027 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072054 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072078 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072110 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072147 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072079 4766 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072162 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072169 4766 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072177 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072190 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072191 4766 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072222 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072240 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" 
seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072253 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072265 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072278 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072291 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072311 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072323 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072336 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072348 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072361 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072373 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072389 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 29 11:21:05 crc 
kubenswrapper[4766]: I0129 11:21:05.072402 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072415 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072451 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072466 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072479 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072492 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072504 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072516 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072528 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072541 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072553 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 29 
11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072565 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072578 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072591 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072603 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072616 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072628 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072639 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072650 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072661 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072672 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072684 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072703 4766 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072719 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072734 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072748 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072760 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072776 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072791 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072805 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072819 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072833 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072846 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072859 4766 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072870 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072882 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072893 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072905 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072919 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072931 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072942 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072954 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072967 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072978 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.072991 4766 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.073002 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.073013 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.073026 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.073039 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.073051 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.073062 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.073076 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.073309 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.073334 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.073370 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.073541 4766 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.073587 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.073635 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.073654 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.073670 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.073702 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.073719 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.073743 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.073759 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.073776 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.073801 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.073816 4766 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.073839 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.073858 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.074365 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.074391 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.074457 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.074482 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.074502 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.074532 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.074551 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.074587 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.074609 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.074626 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.074652 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.074669 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.074694 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.074712 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.074764 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.074797 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.074819 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.074849 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.074869 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.074888 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.074914 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.074935 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.074957 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.074980 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075000 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075028 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075048 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075076 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075093 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075112 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075136 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075156 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075188 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075209 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075226 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075252 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075270 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075297 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075314 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075332 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075354 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075372 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075393 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075413 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075449 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075474 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075491 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075516 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075534 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075549 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075585 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075604 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075624 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075646 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075663 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075687 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075702 4766 reconstruct.go:97] "Volume reconstruction finished" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.075716 4766 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:21:05 crc kubenswrapper[4766]: E0129 11:21:05.139183 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 11:21:05 crc kubenswrapper[4766]: E0129 11:21:05.148228 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="400ms" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.221386 4766 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.222977 4766 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.223067 4766 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.223166 4766 kubelet.go:2335] "Starting kubelet main sync loop" Jan 29 11:21:05 crc kubenswrapper[4766]: E0129 11:21:05.223555 4766 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:21:05 crc kubenswrapper[4766]: W0129 11:21:05.224158 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 29 11:21:05 crc kubenswrapper[4766]: E0129 11:21:05.224298 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:21:05 crc kubenswrapper[4766]: E0129 11:21:05.240139 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 11:21:05 crc kubenswrapper[4766]: E0129 11:21:05.324533 4766 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:21:05 crc kubenswrapper[4766]: E0129 11:21:05.340374 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 11:21:05 crc kubenswrapper[4766]: E0129 11:21:05.440861 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.468045 4766 policy_none.go:49] "None policy: Start" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.471355 4766 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.471516 4766 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:21:05 crc kubenswrapper[4766]: E0129 11:21:05.525096 4766 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:21:05 crc kubenswrapper[4766]: E0129 11:21:05.541836 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 11:21:05 crc kubenswrapper[4766]: E0129 11:21:05.550833 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="800ms" Jan 29 11:21:05 crc kubenswrapper[4766]: E0129 11:21:05.642094 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 11:21:05 crc kubenswrapper[4766]: E0129 11:21:05.742257 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.812767 4766 manager.go:334] "Starting Device Plugin manager" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.813053 4766 manager.go:513] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.813091 4766 server.go:79] "Starting device plugin registration server" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.813859 4766 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.813886 4766 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.814158 4766 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.814365 4766 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.814378 4766 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:21:05 crc kubenswrapper[4766]: E0129 11:21:05.820671 4766 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.915090 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.916656 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.916692 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.916703 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.916728 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 11:21:05 crc kubenswrapper[4766]: E0129 11:21:05.917313 4766 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.925290 4766 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.925463 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.927115 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.927171 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.927187 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.927413 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.927609 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.927664 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.928551 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.928602 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.928625 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.928786 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.928816 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.928852 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.928864 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.929033 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.929134 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.930731 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.930771 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.930784 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.931382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.931411 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.931448 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.931583 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.931779 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.932100 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.932152 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.932551 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.932598 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.932612 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.932794 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.932758 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 22:47:29.055358652 +0000 UTC Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.932945 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.933140 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.933186 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.933203 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.933358 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.933738 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.933766 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.933776 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.933964 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.933988 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.934727 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.934759 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.934767 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.934733 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.934812 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.934826 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.995738 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.995831 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.995878 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.995928 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.995960 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.996034 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:21:05 crc 
kubenswrapper[4766]: I0129 11:21:05.996084 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.996125 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.996152 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.996175 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.996210 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.996233 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.996262 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.996293 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 11:21:05 crc kubenswrapper[4766]: I0129 11:21:05.996319 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.097573 4766 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.097729 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.097808 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.097851 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.097874 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.097905 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.097917 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.097943 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.097948 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.097971 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.097976 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.098001 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.097992 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.098002 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.098051 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.098021 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.098057 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.098078 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.098270 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.098296 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.098312 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.098333 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.098109 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.098346 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.098401 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.098331 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.098122 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.098384 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.098466 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.118372 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.119841 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.119884 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.119894 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.119933 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 11:21:06 crc kubenswrapper[4766]: E0129 11:21:06.120504 4766 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Jan 29 11:21:06 crc kubenswrapper[4766]: W0129 11:21:06.152715 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 29 11:21:06 crc kubenswrapper[4766]: E0129 11:21:06.152801 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:21:06 crc kubenswrapper[4766]: W0129 11:21:06.239134 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 29 11:21:06 crc kubenswrapper[4766]: E0129 11:21:06.239209 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:21:06 crc kubenswrapper[4766]: W0129 11:21:06.253912 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 29 11:21:06 crc kubenswrapper[4766]: E0129 11:21:06.253987 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.261743 4766 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.271593 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.294918 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.308988 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.316398 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 11:21:06 crc kubenswrapper[4766]: E0129 11:21:06.352607 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="1.6s" Jan 29 11:21:06 crc kubenswrapper[4766]: W0129 11:21:06.465655 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 29 11:21:06 crc kubenswrapper[4766]: E0129 11:21:06.465729 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:21:06 crc kubenswrapper[4766]: W0129 11:21:06.486063 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-3073fc24494a08e9d52b31bc5be8ac77053b4ab99e8e6c4f635a9eb9d203acd0 WatchSource:0}: Error finding container 3073fc24494a08e9d52b31bc5be8ac77053b4ab99e8e6c4f635a9eb9d203acd0: Status 404 returned error can't find the container with id 3073fc24494a08e9d52b31bc5be8ac77053b4ab99e8e6c4f635a9eb9d203acd0 Jan 29 11:21:06 crc kubenswrapper[4766]: W0129 11:21:06.491974 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-454187a0e7df66804f7faf309984ae1f60040442f57866fb2e151904dbd132a2 WatchSource:0}: Error finding container 454187a0e7df66804f7faf309984ae1f60040442f57866fb2e151904dbd132a2: Status 404 returned error can't find the container with id 454187a0e7df66804f7faf309984ae1f60040442f57866fb2e151904dbd132a2 Jan 29 11:21:06 crc kubenswrapper[4766]: W0129 11:21:06.495279 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-27daba1e82a4f041ccf7aaabe144b9c59629bbe286c824aa259007c5e9ede625 WatchSource:0}: Error finding container 27daba1e82a4f041ccf7aaabe144b9c59629bbe286c824aa259007c5e9ede625: Status 404 returned error can't find the container with id 27daba1e82a4f041ccf7aaabe144b9c59629bbe286c824aa259007c5e9ede625 Jan 29 
11:21:06 crc kubenswrapper[4766]: W0129 11:21:06.495885 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-0ea54b9a00bd330967d3081dc11b66aa2ded4d4d7374c3e795f690bb23b6db45 WatchSource:0}: Error finding container 0ea54b9a00bd330967d3081dc11b66aa2ded4d4d7374c3e795f690bb23b6db45: Status 404 returned error can't find the container with id 0ea54b9a00bd330967d3081dc11b66aa2ded4d4d7374c3e795f690bb23b6db45 Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.520680 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.523870 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.523923 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.523938 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.523983 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 11:21:06 crc kubenswrapper[4766]: E0129 11:21:06.524858 4766 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.574869 4766 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 11:21:06 crc kubenswrapper[4766]: E0129 11:21:06.576170 4766 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.931675 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 29 11:21:06 crc kubenswrapper[4766]: I0129 11:21:06.933874 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 00:09:03.04057107 +0000 UTC Jan 29 11:21:07 crc kubenswrapper[4766]: I0129 11:21:07.243204 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"27daba1e82a4f041ccf7aaabe144b9c59629bbe286c824aa259007c5e9ede625"} Jan 29 11:21:07 crc kubenswrapper[4766]: I0129 11:21:07.244812 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"454187a0e7df66804f7faf309984ae1f60040442f57866fb2e151904dbd132a2"} Jan 29 11:21:07 crc kubenswrapper[4766]: I0129 11:21:07.245726 4766 kubelet.go:2453] "SyncLoop (PLEG): event for 
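From here the log is dominated by a single condition: nothing that needs the API server works yet, because kube-apiserver is itself one of the static pods this kubelet is still starting. Node registration, the informer reflectors (RuntimeClass, Service, CSIDriver, Node), the node lease controller, CSR creation, and event writes all fail against api-int.crc.testing:6443 with connection refused and retry independently. The lease controller's logged retry interval doubles from 1.6s above to 3.2s and 6.4s before being clamped at 7s later in this log; a sketch of that doubling-with-cap behavior (the starting value and cap are read off these log lines, not taken from kubelet source):

```go
// Sketch of the retry pattern behind the repeated
// "Failed to ensure lease exists, will retry" lines:
// the interval grows 1.6s -> 3.2s -> 6.4s and is then clamped to 7s.
package main

import (
	"errors"
	"fmt"
	"time"
)

// ensureLease stands in for the Get against /apis/coordination.k8s.io/...
// while the API server is still unreachable.
func ensureLease() error {
	return errors.New("connect: connection refused")
}

func main() {
	interval := 1600 * time.Millisecond
	const maxInterval = 7 * time.Second
	for attempt := 0; attempt < 5; attempt++ {
		if err := ensureLease(); err != nil {
			fmt.Printf("Failed to ensure lease exists, will retry err=%v interval=%q\n", err, interval.String())
			time.Sleep(interval)
			if interval *= 2; interval > maxInterval {
				interval = maxInterval // cap matches the interval="7s" entries below
			}
		}
	}
}
```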
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3073fc24494a08e9d52b31bc5be8ac77053b4ab99e8e6c4f635a9eb9d203acd0"} Jan 29 11:21:07 crc kubenswrapper[4766]: I0129 11:21:07.246634 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"40e6b7e07c9db1ddee4446cdc4ba8e06d9a2f18b392228881f67156763aa3961"} Jan 29 11:21:07 crc kubenswrapper[4766]: I0129 11:21:07.247461 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0ea54b9a00bd330967d3081dc11b66aa2ded4d4d7374c3e795f690bb23b6db45"} Jan 29 11:21:07 crc kubenswrapper[4766]: I0129 11:21:07.326797 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:07 crc kubenswrapper[4766]: I0129 11:21:07.328302 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:07 crc kubenswrapper[4766]: I0129 11:21:07.328340 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:07 crc kubenswrapper[4766]: I0129 11:21:07.328354 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:07 crc kubenswrapper[4766]: I0129 11:21:07.328376 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 11:21:07 crc kubenswrapper[4766]: E0129 11:21:07.328655 4766 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Jan 29 11:21:07 crc kubenswrapper[4766]: I0129 11:21:07.932403 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 29 11:21:07 crc kubenswrapper[4766]: I0129 11:21:07.934465 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 05:38:37.636359906 +0000 UTC Jan 29 11:21:07 crc kubenswrapper[4766]: E0129 11:21:07.954002 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="3.2s" Jan 29 11:21:08 crc kubenswrapper[4766]: W0129 11:21:08.844093 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 29 11:21:08 crc kubenswrapper[4766]: E0129 11:21:08.844503 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 29 
11:21:08 crc kubenswrapper[4766]: W0129 11:21:08.862539 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 29 11:21:08 crc kubenswrapper[4766]: E0129 11:21:08.862685 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:21:08 crc kubenswrapper[4766]: I0129 11:21:08.929264 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:08 crc kubenswrapper[4766]: I0129 11:21:08.930835 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:08 crc kubenswrapper[4766]: I0129 11:21:08.930965 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:08 crc kubenswrapper[4766]: I0129 11:21:08.930985 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:08 crc kubenswrapper[4766]: I0129 11:21:08.931071 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 11:21:08 crc kubenswrapper[4766]: I0129 11:21:08.931735 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 29 11:21:08 crc kubenswrapper[4766]: E0129 11:21:08.932130 4766 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Jan 29 11:21:08 crc kubenswrapper[4766]: I0129 11:21:08.935181 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 08:11:44.472018486 +0000 UTC Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.253197 4766 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="340091929d2db093c111ffe69890053b76766a605522ff9ce5ee2d307430a47f" exitCode=0 Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.253273 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.253272 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"340091929d2db093c111ffe69890053b76766a605522ff9ce5ee2d307430a47f"} Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.254615 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.254650 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.254661 4766 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.255307 4766 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="89136d4c4f8fb5bba2c61dbdeeb8d207b694025da3d0b305163ca6d237a5c749" exitCode=0 Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.255363 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"89136d4c4f8fb5bba2c61dbdeeb8d207b694025da3d0b305163ca6d237a5c749"} Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.255450 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.259722 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.259764 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.259779 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.261644 4766 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c" exitCode=0 Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.261752 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c"} Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.261779 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.262966 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.263004 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.263027 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.263579 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ec6eeec32db3cd97e718206000b41183351e1698186a661547746982cef1518a"} Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.265167 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca" exitCode=0 Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.265252 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca"} Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.265375 4766 kubelet_node_status.go:401] "Setting node 
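The "Generic (PLEG): container finished ... exitCode=0" / ContainerDied pairs above are the Pod Lifecycle Event Generator noticing each static pod's setup container exit cleanly before the main containers come up. PLEG periodically relists container state and diffs it against its previous snapshot to produce the "SyncLoop (PLEG)" events the sync loop consumes. A toy version of that diff (the real PLEG tracks more states and per-pod records; this shape is an assumption for illustration):

```go
// Sketch of a PLEG-style relist: diff two snapshots of container state
// and emit ContainerStarted/ContainerDied events like the lines above.
package main

import "fmt"

type state string

const (
	running state = "running"
	exited  state = "exited"
)

func relist(prev, curr map[string]state) {
	for id, s := range curr {
		switch p := prev[id]; {
		case p != running && s == running:
			fmt.Println("SyncLoop (PLEG): ContainerStarted", id)
		case p == running && s == exited:
			fmt.Println("SyncLoop (PLEG): ContainerDied", id)
		}
	}
}

func main() {
	before := map[string]state{"340091929d2d": running} // setup container running
	after := map[string]state{"340091929d2d": exited}   // finished with exitCode=0
	relist(before, after)
}
```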
Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.265375 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.266643 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.266678 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.266690 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.268575 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.269652 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.269686 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.269695 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:09 crc kubenswrapper[4766]: W0129 11:21:09.320403 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 29 11:21:09 crc kubenswrapper[4766]: E0129 11:21:09.320530 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:21:09 crc kubenswrapper[4766]: W0129 11:21:09.385809 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 29 11:21:09 crc kubenswrapper[4766]: E0129 11:21:09.385889 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.931659 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 29 11:21:09 crc kubenswrapper[4766]: I0129 11:21:09.935855 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 18:19:06.229348918 +0000 UTC
Jan 29 11:21:10 crc kubenswrapper[4766]: E0129 11:21:10.381637 4766 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.194:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f2fbc6e385fd6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 11:21:04.929439702 +0000 UTC m=+2.041832733,LastTimestamp:2026-01-29 11:21:04.929439702 +0000 UTC m=+2.041832733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 29 11:21:10 crc kubenswrapper[4766]: I0129 11:21:10.614632 4766 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 29 11:21:10 crc kubenswrapper[4766]: E0129 11:21:10.615812 4766 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:21:10 crc kubenswrapper[4766]: I0129 11:21:10.932123 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 29 11:21:10 crc kubenswrapper[4766]: I0129 11:21:10.936263 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 17:51:24.879167642 +0000 UTC
Jan 29 11:21:11 crc kubenswrapper[4766]: E0129 11:21:11.155056 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="6.4s"
Jan 29 11:21:11 crc kubenswrapper[4766]: I0129 11:21:11.273739 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"627f1cbde0bcbdc735a292c896c151e796db5038d619da66cc9d97c9e94a5721"}
Jan 29 11:21:11 crc kubenswrapper[4766]: I0129 11:21:11.275592 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9c57a93549ba2188c3e3b8944e05cbc29caeddc0eb3f54f8bd4f019224a9bb82"}
Jan 29 11:21:11 crc kubenswrapper[4766]: I0129 11:21:11.277138 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"78c3e4b23de55df1e7416d9834c594e6b8baa72850428481ae9589ac2e3a2848"}
Jan 29 11:21:11 crc kubenswrapper[4766]: I0129 11:21:11.278716 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"509f5e01bea7149b8c69f416c9d88c388d3db3e6300254e1d58b167629183dfc"}
Jan 29 11:21:11 crc kubenswrapper[4766]: I0129 11:21:11.280323 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c126f1878b27bb8648cebba2334b545a61682575e486c7752447760c630b71f8"}
Jan 29 11:21:11 crc kubenswrapper[4766]: I0129 11:21:11.931786 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 29 11:21:11 crc kubenswrapper[4766]: I0129 11:21:11.936815 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 22:54:39.805647901 +0000 UTC
Jan 29 11:21:12 crc kubenswrapper[4766]: I0129 11:21:12.132308 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 11:21:12 crc kubenswrapper[4766]: I0129 11:21:12.133752 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:12 crc kubenswrapper[4766]: I0129 11:21:12.133804 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:12 crc kubenswrapper[4766]: I0129 11:21:12.133821 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:12 crc kubenswrapper[4766]: I0129 11:21:12.133849 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 29 11:21:12 crc kubenswrapper[4766]: E0129 11:21:12.134374 4766 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc"
Jan 29 11:21:12 crc kubenswrapper[4766]: W0129 11:21:12.615781 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 29 11:21:12 crc kubenswrapper[4766]: E0129 11:21:12.615952 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:21:12 crc kubenswrapper[4766]: I0129 11:21:12.932848 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 29 11:21:12 crc kubenswrapper[4766]: I0129 11:21:12.936971 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 02:17:24.675307299 +0000 UTC
Jan 29 11:21:12 crc kubenswrapper[4766]: W0129 11:21:12.940433 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 29 11:21:12 crc kubenswrapper[4766]: E0129 11:21:12.940524 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:21:13 crc kubenswrapper[4766]: W0129 11:21:13.110910 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 29 11:21:13 crc kubenswrapper[4766]: E0129 11:21:13.111000 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:21:13 crc kubenswrapper[4766]: I0129 11:21:13.288834 4766 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9c57a93549ba2188c3e3b8944e05cbc29caeddc0eb3f54f8bd4f019224a9bb82" exitCode=0
Jan 29 11:21:13 crc kubenswrapper[4766]: I0129 11:21:13.288964 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9c57a93549ba2188c3e3b8944e05cbc29caeddc0eb3f54f8bd4f019224a9bb82"}
Jan 29 11:21:13 crc kubenswrapper[4766]: I0129 11:21:13.289008 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 11:21:13 crc kubenswrapper[4766]: I0129 11:21:13.290191 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:13 crc kubenswrapper[4766]: I0129 11:21:13.290241 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:13 crc kubenswrapper[4766]: I0129 11:21:13.290260 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:13 crc kubenswrapper[4766]: I0129 11:21:13.932170 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 29 11:21:13 crc kubenswrapper[4766]: I0129 11:21:13.937296 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 10:06:36.280871048 +0000 UTC
Jan 29 11:21:14 crc kubenswrapper[4766]: W0129 11:21:14.910996 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 29 11:21:14 crc kubenswrapper[4766]: E0129 11:21:14.911133 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:21:14 crc kubenswrapper[4766]: I0129 11:21:14.931809 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 29 11:21:14 crc kubenswrapper[4766]: I0129 11:21:14.938293 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 17:24:29.37974684 +0000 UTC
Jan 29 11:21:15 crc kubenswrapper[4766]: I0129 11:21:15.293381 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 11:21:15 crc kubenswrapper[4766]: I0129 11:21:15.294571 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:15 crc kubenswrapper[4766]: I0129 11:21:15.294624 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:15 crc kubenswrapper[4766]: I0129 11:21:15.294637 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:15 crc kubenswrapper[4766]: E0129 11:21:15.820828 4766 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 29 11:21:15 crc kubenswrapper[4766]: I0129 11:21:15.932031 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 29 11:21:15 crc kubenswrapper[4766]: I0129 11:21:15.938816 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 06:04:25.884281023 +0000 UTC
Jan 29 11:21:16 crc kubenswrapper[4766]: I0129 11:21:16.932097 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 29 11:21:16 crc kubenswrapper[4766]: I0129 11:21:16.939358 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 20:49:46.326694073 +0000 UTC
Jan 29 11:21:17 crc kubenswrapper[4766]: I0129 11:21:17.300632 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d545b1c015854aae81ddf385c118593789397a7f62077baaf1261ddda6b81fad"}
Jan 29 11:21:17 crc kubenswrapper[4766]: E0129 11:21:17.556272 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="7s"
Jan 29 11:21:17 crc kubenswrapper[4766]: I0129 11:21:17.931772 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 29 11:21:17 crc kubenswrapper[4766]: I0129 11:21:17.940215 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 08:09:15.894582047 +0000 UTC
Jan 29 11:21:18 crc kubenswrapper[4766]: I0129 11:21:18.305706 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c6af6b65be19d42cb0398dd814bea1497dd7a258533b34d84a55aafe3997a422"}
Jan 29 11:21:18 crc kubenswrapper[4766]: I0129 11:21:18.308217 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1a1895436e31a3a277d7ef40231e37f768d143472a5d055ec3fa3908d59eb806"}
Jan 29 11:21:18 crc kubenswrapper[4766]: I0129 11:21:18.310302 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9c79346d5e42839cf96932f256383d9d926ddb9eb74b6959195bdc3502f6224b"}
Jan 29 11:21:18 crc kubenswrapper[4766]: I0129 11:21:18.310503 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 11:21:18 crc kubenswrapper[4766]: I0129 11:21:18.311474 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:18 crc kubenswrapper[4766]: I0129 11:21:18.311503 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:18 crc kubenswrapper[4766]: I0129 11:21:18.311512 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:18 crc kubenswrapper[4766]: I0129 11:21:18.535319 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 11:21:18 crc kubenswrapper[4766]: I0129 11:21:18.536857 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:18 crc kubenswrapper[4766]: I0129 11:21:18.536896 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:18 crc kubenswrapper[4766]: I0129 11:21:18.536908 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:18 crc kubenswrapper[4766]: I0129 11:21:18.536940 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 29 11:21:18 crc kubenswrapper[4766]: E0129 11:21:18.537510 4766 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc"
Jan 29 11:21:18 crc kubenswrapper[4766]: I0129 11:21:18.932190 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 29 11:21:18 crc kubenswrapper[4766]: I0129 11:21:18.940722 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 16:44:31.887149879 +0000 UTC
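The register-node loop keeps cycling in the background of all of this: set the status annotation, record the NodeHas* conditions, "Attempting to register node", then fail on the POST to /api/v1/nodes and go around again. Roughly (hypothetical client code, not the kubelet's; the real kubelet also treats an already-existing Node as success, which the 409 branch stands in for):

```go
// Sketch of the register-node retry implied by the repeated
// "Attempting to register node" / "Unable to register node" pairs.
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

// registerNode POSTs a minimal Node object; 201 means created,
// 409 means an earlier attempt (or a previous kubelet run) already did.
func registerNode(url string) error {
	resp, err := http.Post(url, "application/json", bytes.NewBufferString(`{"metadata":{"name":"crc"}}`))
	if err != nil {
		return err // e.g. dial tcp 38.102.83.194:6443: connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusCreated && resp.StatusCode != http.StatusConflict {
		return fmt.Errorf("unexpected status %s", resp.Status)
	}
	return nil
}

func main() {
	for attempt := 1; attempt <= 3; attempt++ {
		fmt.Println(`Attempting to register node node="crc"`)
		if err := registerNode("https://api-int.crc.testing:6443/api/v1/nodes"); err != nil {
			fmt.Println("Unable to register node with API server err=", err)
			time.Sleep(2 * time.Second)
			continue
		}
		fmt.Println("Successfully registered node")
		return
	}
}
```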
Jan 29 11:21:19 crc kubenswrapper[4766]: I0129 11:21:19.271387 4766 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 29 11:21:19 crc kubenswrapper[4766]: E0129 11:21:19.272584 4766 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:21:19 crc kubenswrapper[4766]: I0129 11:21:19.314512 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"48d4b9058cea53335860f66fdf06820202660275143325c3dc5b813df1d60818"}
Jan 29 11:21:19 crc kubenswrapper[4766]: I0129 11:21:19.316852 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a3a4c1de706188e9d9c986cf611fcfa0afc2fa6d0d9e45908d9864fbd096fb7f"}
Jan 29 11:21:19 crc kubenswrapper[4766]: I0129 11:21:19.319729 4766 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9c79346d5e42839cf96932f256383d9d926ddb9eb74b6959195bdc3502f6224b" exitCode=0
Jan 29 11:21:19 crc kubenswrapper[4766]: I0129 11:21:19.319765 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9c79346d5e42839cf96932f256383d9d926ddb9eb74b6959195bdc3502f6224b"}
Jan 29 11:21:19 crc kubenswrapper[4766]: I0129 11:21:19.319869 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 11:21:19 crc kubenswrapper[4766]: I0129 11:21:19.320883 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:19 crc kubenswrapper[4766]: I0129 11:21:19.320910 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:19 crc kubenswrapper[4766]: I0129 11:21:19.320919 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:19 crc kubenswrapper[4766]: I0129 11:21:19.931871 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 29 11:21:19 crc kubenswrapper[4766]: I0129 11:21:19.941391 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 09:22:24.006450665 +0000 UTC
Jan 29 11:21:20 crc kubenswrapper[4766]: I0129 11:21:20.322017 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 11:21:20 crc kubenswrapper[4766]: I0129 11:21:20.323271 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:20 crc kubenswrapper[4766]: I0129 11:21:20.323310 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:20 crc kubenswrapper[4766]: I0129 11:21:20.323322 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:20 crc kubenswrapper[4766]: E0129 11:21:20.383115 4766 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.194:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f2fbc6e385fd6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 11:21:04.929439702 +0000 UTC m=+2.041832733,LastTimestamp:2026-01-29 11:21:04.929439702 +0000 UTC m=+2.041832733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 29 11:21:20 crc kubenswrapper[4766]: I0129 11:21:20.932727 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 29 11:21:20 crc kubenswrapper[4766]: I0129 11:21:20.941941 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 20:06:07.748862129 +0000 UTC
Jan 29 11:21:21 crc kubenswrapper[4766]: I0129 11:21:21.326246 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f9057e7dacac5ef2dd405ea124359e5bc143025ab45ad29f20d5f6c16da236b2"}
Jan 29 11:21:21 crc kubenswrapper[4766]: I0129 11:21:21.326303 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"be35ac9ff26d4e33294cd586455634fa2e2f070b3b9c39f1b02cc683e2fdc7eb"}
Jan 29 11:21:21 crc kubenswrapper[4766]: I0129 11:21:21.328218 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 11:21:21 crc kubenswrapper[4766]: I0129 11:21:21.328500 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"368e7d2846989301de5391a33bce19ec278b8a597dad4b565340a9102cb0ca8c"}
Jan 29 11:21:21 crc kubenswrapper[4766]: I0129 11:21:21.328945 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:21 crc kubenswrapper[4766]: I0129 11:21:21.328971 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:21 crc kubenswrapper[4766]: I0129 11:21:21.328982 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:21 crc kubenswrapper[4766]: I0129 11:21:21.330575 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"964049484efc670285ee54e4f6081c1f719edaa8143966e9762028ad97d2518e"}
Jan 29 11:21:21 crc kubenswrapper[4766]: I0129 11:21:21.932339 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 29 11:21:21 crc kubenswrapper[4766]: I0129 11:21:21.942670 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 03:21:26.630099662 +0000 UTC
Jan 29 11:21:22 crc kubenswrapper[4766]: I0129 11:21:22.336110 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"66e32fa2f2375acfefa6843a62686817c467b1081d3cbc67f8c4a2a8808e25b0"}
Jan 29 11:21:22 crc kubenswrapper[4766]: I0129 11:21:22.336195 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 11:21:22 crc kubenswrapper[4766]: I0129 11:21:22.336273 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 11:21:22 crc kubenswrapper[4766]: I0129 11:21:22.337083 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:22 crc kubenswrapper[4766]: I0129 11:21:22.337118 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:22 crc kubenswrapper[4766]: I0129 11:21:22.337128 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:22 crc kubenswrapper[4766]: I0129 11:21:22.931610 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 29 11:21:22 crc kubenswrapper[4766]: I0129 11:21:22.942952 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 12:11:34.508996817 +0000 UTC
Jan 29 11:21:23 crc kubenswrapper[4766]: W0129 11:21:23.326333 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 29 11:21:23 crc kubenswrapper[4766]: E0129 11:21:23.326463 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:21:23 crc kubenswrapper[4766]: I0129 11:21:23.343047 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3ab2524a59d6f3504907bae7dae0f390e8326b9490441dbee277bc0a44d8c3d3"}
Jan 29 11:21:23 crc kubenswrapper[4766]: I0129 11:21:23.343101 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a6d02adb96cd77bb10d186e4a9d47ea85ec282480dd0cfd5ef108274fc6b74d7"}
Jan 29 11:21:23 crc kubenswrapper[4766]: I0129 11:21:23.343155 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 11:21:23 crc kubenswrapper[4766]: I0129 11:21:23.343194 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 11:21:23 crc kubenswrapper[4766]: I0129 11:21:23.344182 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:23 crc kubenswrapper[4766]: I0129 11:21:23.344221 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:23 crc kubenswrapper[4766]: I0129 11:21:23.344233 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:23 crc kubenswrapper[4766]: I0129 11:21:23.344238 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:23 crc kubenswrapper[4766]: I0129 11:21:23.344280 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:23 crc kubenswrapper[4766]: I0129 11:21:23.344293 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:23 crc kubenswrapper[4766]: I0129 11:21:23.372360 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 11:21:23 crc kubenswrapper[4766]: I0129 11:21:23.372569 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 11:21:23 crc kubenswrapper[4766]: I0129 11:21:23.373855 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:23 crc kubenswrapper[4766]: I0129 11:21:23.373909 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:23 crc kubenswrapper[4766]: I0129 11:21:23.373921 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:23 crc kubenswrapper[4766]: I0129 11:21:23.932711 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 29 11:21:23 crc kubenswrapper[4766]: I0129 11:21:23.943437 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 16:48:03.157316448 +0000 UTC
Jan 29 11:21:24 crc kubenswrapper[4766]: I0129 11:21:24.346325 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 29 11:21:24 crc kubenswrapper[4766]: I0129 11:21:24.347890 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="66e32fa2f2375acfefa6843a62686817c467b1081d3cbc67f8c4a2a8808e25b0" exitCode=255
Jan 29 11:21:24 crc kubenswrapper[4766]: I0129 11:21:24.347960 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"66e32fa2f2375acfefa6843a62686817c467b1081d3cbc67f8c4a2a8808e25b0"}
Jan 29 11:21:24 crc kubenswrapper[4766]: I0129 11:21:24.348113 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 11:21:24 crc kubenswrapper[4766]: I0129 11:21:24.348962 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:24 crc kubenswrapper[4766]: I0129 11:21:24.349023 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:24 crc kubenswrapper[4766]: I0129 11:21:24.349037 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:24 crc kubenswrapper[4766]: I0129 11:21:24.349725 4766 scope.go:117] "RemoveContainer" containerID="66e32fa2f2375acfefa6843a62686817c467b1081d3cbc67f8c4a2a8808e25b0"
Jan 29 11:21:24 crc kubenswrapper[4766]: I0129 11:21:24.355181 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"63bd3ed7fe3334bb28ec0880e5a9afc307d112e4a801744891faf2c28710a533"}
Jan 29 11:21:24 crc kubenswrapper[4766]: I0129 11:21:24.355300 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 11:21:24 crc kubenswrapper[4766]: I0129 11:21:24.356289 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:24 crc kubenswrapper[4766]: I0129 11:21:24.356320 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:24 crc kubenswrapper[4766]: I0129 11:21:24.356330 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:24 crc kubenswrapper[4766]: E0129 11:21:24.558531 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="7s"
Jan 29 11:21:24 crc kubenswrapper[4766]: I0129 11:21:24.728890 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 11:21:24 crc kubenswrapper[4766]: I0129 11:21:24.931887 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 29 11:21:24 crc kubenswrapper[4766]: I0129 11:21:24.944131 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 20:38:50.497471451 +0000 UTC
logger="UnhandledError" Jan 29 11:21:25 crc kubenswrapper[4766]: I0129 11:21:25.368734 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 11:21:25 crc kubenswrapper[4766]: I0129 11:21:25.370156 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231"} Jan 29 11:21:25 crc kubenswrapper[4766]: I0129 11:21:25.370229 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:25 crc kubenswrapper[4766]: I0129 11:21:25.370260 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:21:25 crc kubenswrapper[4766]: I0129 11:21:25.370229 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:25 crc kubenswrapper[4766]: I0129 11:21:25.371270 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:25 crc kubenswrapper[4766]: I0129 11:21:25.371298 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:25 crc kubenswrapper[4766]: I0129 11:21:25.371308 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:25 crc kubenswrapper[4766]: I0129 11:21:25.371521 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:25 crc kubenswrapper[4766]: I0129 11:21:25.371726 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:25 crc kubenswrapper[4766]: I0129 11:21:25.371767 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:25 crc kubenswrapper[4766]: I0129 11:21:25.537967 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:25 crc kubenswrapper[4766]: I0129 11:21:25.539280 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:25 crc kubenswrapper[4766]: I0129 11:21:25.539319 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:25 crc kubenswrapper[4766]: I0129 11:21:25.539331 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:25 crc kubenswrapper[4766]: I0129 11:21:25.539355 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 11:21:25 crc kubenswrapper[4766]: E0129 11:21:25.820941 4766 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 29 11:21:25 crc kubenswrapper[4766]: I0129 11:21:25.944865 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 11:29:30.140207358 +0000 UTC Jan 29 11:21:26 crc kubenswrapper[4766]: I0129 11:21:26.372605 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 
11:21:26 crc kubenswrapper[4766]: I0129 11:21:26.372690 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:21:26 crc kubenswrapper[4766]: I0129 11:21:26.373511 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:26 crc kubenswrapper[4766]: I0129 11:21:26.373536 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:26 crc kubenswrapper[4766]: I0129 11:21:26.373545 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:26 crc kubenswrapper[4766]: I0129 11:21:26.600262 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:21:26 crc kubenswrapper[4766]: I0129 11:21:26.751338 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 29 11:21:26 crc kubenswrapper[4766]: I0129 11:21:26.751656 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:26 crc kubenswrapper[4766]: I0129 11:21:26.752893 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:26 crc kubenswrapper[4766]: I0129 11:21:26.752966 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:26 crc kubenswrapper[4766]: I0129 11:21:26.752977 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:26 crc kubenswrapper[4766]: I0129 11:21:26.792903 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 11:21:26 crc kubenswrapper[4766]: I0129 11:21:26.793135 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:26 crc kubenswrapper[4766]: I0129 11:21:26.794377 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:26 crc kubenswrapper[4766]: I0129 11:21:26.794439 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:26 crc kubenswrapper[4766]: I0129 11:21:26.794457 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:26 crc kubenswrapper[4766]: I0129 11:21:26.945912 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 21:39:09.738500215 +0000 UTC Jan 29 11:21:27 crc kubenswrapper[4766]: I0129 11:21:27.374370 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:27 crc kubenswrapper[4766]: I0129 11:21:27.375209 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:27 crc kubenswrapper[4766]: I0129 11:21:27.375249 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:27 crc kubenswrapper[4766]: I0129 11:21:27.375267 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:27 
crc kubenswrapper[4766]: I0129 11:21:27.802685 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:21:27 crc kubenswrapper[4766]: I0129 11:21:27.946991 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 05:17:46.117047592 +0000 UTC Jan 29 11:21:28 crc kubenswrapper[4766]: I0129 11:21:28.120899 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 11:21:28 crc kubenswrapper[4766]: I0129 11:21:28.121134 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:28 crc kubenswrapper[4766]: I0129 11:21:28.122353 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:28 crc kubenswrapper[4766]: I0129 11:21:28.122393 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:28 crc kubenswrapper[4766]: I0129 11:21:28.122403 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:28 crc kubenswrapper[4766]: I0129 11:21:28.377278 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:28 crc kubenswrapper[4766]: I0129 11:21:28.378336 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:28 crc kubenswrapper[4766]: I0129 11:21:28.378383 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:28 crc kubenswrapper[4766]: I0129 11:21:28.378392 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:28 crc kubenswrapper[4766]: I0129 11:21:28.619901 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 11:21:28 crc kubenswrapper[4766]: I0129 11:21:28.620122 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:28 crc kubenswrapper[4766]: I0129 11:21:28.621542 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:28 crc kubenswrapper[4766]: I0129 11:21:28.621586 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:28 crc kubenswrapper[4766]: I0129 11:21:28.621599 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:28 crc kubenswrapper[4766]: I0129 11:21:28.626356 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 11:21:28 crc kubenswrapper[4766]: I0129 11:21:28.947165 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 11:17:22.007629852 +0000 UTC Jan 29 11:21:29 crc kubenswrapper[4766]: I0129 11:21:29.342965 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 29 11:21:29 crc kubenswrapper[4766]: I0129 11:21:29.343160 4766 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:29 crc kubenswrapper[4766]: I0129 11:21:29.344284 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:29 crc kubenswrapper[4766]: I0129 11:21:29.344310 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:29 crc kubenswrapper[4766]: I0129 11:21:29.344318 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:29 crc kubenswrapper[4766]: I0129 11:21:29.379597 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:29 crc kubenswrapper[4766]: I0129 11:21:29.379920 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:29 crc kubenswrapper[4766]: I0129 11:21:29.387886 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:29 crc kubenswrapper[4766]: I0129 11:21:29.387932 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:29 crc kubenswrapper[4766]: I0129 11:21:29.387947 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:29 crc kubenswrapper[4766]: I0129 11:21:29.388272 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:29 crc kubenswrapper[4766]: I0129 11:21:29.388335 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:29 crc kubenswrapper[4766]: I0129 11:21:29.388347 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:29 crc kubenswrapper[4766]: I0129 11:21:29.389211 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 11:21:29 crc kubenswrapper[4766]: I0129 11:21:29.948018 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 19:43:24.939436522 +0000 UTC Jan 29 11:21:30 crc kubenswrapper[4766]: I0129 11:21:30.382604 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 11:21:30 crc kubenswrapper[4766]: I0129 11:21:30.383870 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:30 crc kubenswrapper[4766]: I0129 11:21:30.383895 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:30 crc kubenswrapper[4766]: I0129 11:21:30.383906 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:30 crc kubenswrapper[4766]: I0129 11:21:30.949008 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 15:40:01.57733596 +0000 UTC Jan 29 11:21:31 crc kubenswrapper[4766]: I0129 11:21:31.121391 4766 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller 
Jan 29 11:21:31 crc kubenswrapper[4766]: I0129 11:21:31.121529 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 29 11:21:31 crc kubenswrapper[4766]: I0129 11:21:31.949483 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 21:53:54.075205799 +0000 UTC
Jan 29 11:21:32 crc kubenswrapper[4766]: I0129 11:21:32.950324 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 13:34:47.147336569 +0000 UTC
Jan 29 11:21:33 crc kubenswrapper[4766]: I0129 11:21:33.189000 4766 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 29 11:21:33 crc kubenswrapper[4766]: I0129 11:21:33.189071 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 29 11:21:33 crc kubenswrapper[4766]: I0129 11:21:33.194869 4766 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 29 11:21:33 crc kubenswrapper[4766]: I0129 11:21:33.194950 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 29 11:21:33 crc kubenswrapper[4766]: I0129 11:21:33.951580 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 11:56:45.036428604 +0000 UTC
Jan 29 11:21:34 crc kubenswrapper[4766]: I0129 11:21:34.729291 4766 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 29 11:21:34 crc kubenswrapper[4766]: I0129 11:21:34.729377 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 29 11:21:34 crc kubenswrapper[4766]: I0129 11:21:34.952548 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 16:02:35.557926324 +0000 UTC
Jan 29 11:21:35 crc kubenswrapper[4766]: E0129 11:21:35.821444 4766 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 29 11:21:35 crc kubenswrapper[4766]: I0129 11:21:35.953560 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 00:26:20.178235923 +0000 UTC
Jan 29 11:21:36 crc kubenswrapper[4766]: I0129 11:21:36.555836 4766 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 29 11:21:36 crc kubenswrapper[4766]: I0129 11:21:36.572147 4766 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 29 11:21:36 crc kubenswrapper[4766]: I0129 11:21:36.606457 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 11:21:36 crc kubenswrapper[4766]: I0129 11:21:36.606750 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 11:21:36 crc kubenswrapper[4766]: I0129 11:21:36.607554 4766 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 29 11:21:36 crc kubenswrapper[4766]: I0129 11:21:36.607631 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 29 11:21:36 crc kubenswrapper[4766]: I0129 11:21:36.608985 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:36 crc kubenswrapper[4766]: I0129 11:21:36.609066 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:36 crc kubenswrapper[4766]: I0129 11:21:36.609082 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:36 crc kubenswrapper[4766]: I0129 11:21:36.612841 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 11:21:36 crc kubenswrapper[4766]: I0129 11:21:36.807250 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 29 11:21:36 crc kubenswrapper[4766]: I0129 11:21:36.807755 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 11:21:36 crc kubenswrapper[4766]: I0129 11:21:36.809169 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:36 crc kubenswrapper[4766]: I0129 11:21:36.809221 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:36 crc kubenswrapper[4766]: I0129 11:21:36.809235 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:36 crc kubenswrapper[4766]: I0129 11:21:36.830869 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 29 11:21:36 crc kubenswrapper[4766]: I0129 11:21:36.954486 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 21:51:53.019557133 +0000 UTC
Jan 29 11:21:37 crc kubenswrapper[4766]: I0129 11:21:37.000029 4766 csr.go:261] certificate signing request csr-lqbbm is approved, waiting to be issued
Jan 29 11:21:37 crc kubenswrapper[4766]: I0129 11:21:37.015195 4766 csr.go:257] certificate signing request csr-lqbbm is issued
Jan 29 11:21:37 crc kubenswrapper[4766]: I0129 11:21:37.400668 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 11:21:37 crc kubenswrapper[4766]: I0129 11:21:37.401080 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 11:21:37 crc kubenswrapper[4766]: I0129 11:21:37.401219 4766 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 29 11:21:37 crc kubenswrapper[4766]: I0129 11:21:37.401314 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 29 11:21:37 crc kubenswrapper[4766]: I0129 11:21:37.401724 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:37 crc kubenswrapper[4766]: I0129 11:21:37.401769 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:37 crc kubenswrapper[4766]: I0129 11:21:37.401784 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:37 crc kubenswrapper[4766]: I0129 11:21:37.402552 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:37 crc kubenswrapper[4766]: I0129 11:21:37.402667 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:37 crc kubenswrapper[4766]: I0129 11:21:37.402761 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:37 crc kubenswrapper[4766]: I0129 11:21:37.955492 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:36:34.57927919 +0000 UTC
Jan 29 11:21:38 crc kubenswrapper[4766]: I0129 11:21:38.016387 4766 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-29 11:16:37 +0000 UTC, rotation deadline is 2026-11-19 00:26:13.307546949 +0000 UTC
Jan 29 11:21:38 crc kubenswrapper[4766]: I0129 11:21:38.016472 4766 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7045h4m35.291079763s for next certificate rotation
Jan 29 11:21:38 crc kubenswrapper[4766]: I0129 11:21:38.182492 4766 trace.go:236] Trace[931073780]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 11:21:27.018) (total time: 11164ms):
Jan 29 11:21:38 crc kubenswrapper[4766]: Trace[931073780]: ---"Objects listed" error: 11164ms (11:21:38.182)
Jan 29 11:21:38 crc kubenswrapper[4766]: Trace[931073780]: [11.164419917s] [11.164419917s] END
Jan 29 11:21:38 crc kubenswrapper[4766]: I0129 11:21:38.182522 4766 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 29 11:21:38 crc kubenswrapper[4766]: E0129 11:21:38.184203 4766 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 29 11:21:38 crc kubenswrapper[4766]: I0129 11:21:38.198547 4766 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 29 11:21:38 crc kubenswrapper[4766]: I0129 11:21:38.202520 4766 trace.go:236] Trace[1313076669]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 11:21:25.354) (total time: 12848ms):
Jan 29 11:21:38 crc kubenswrapper[4766]: Trace[1313076669]: ---"Objects listed" error: 12848ms (11:21:38.202)
Jan 29 11:21:38 crc kubenswrapper[4766]: Trace[1313076669]: [12.848239486s] [12.848239486s] END
Jan 29 11:21:38 crc kubenswrapper[4766]: I0129 11:21:38.202556 4766 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 29 11:21:38 crc kubenswrapper[4766]: I0129 11:21:38.334359 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 11:21:38 crc kubenswrapper[4766]: I0129 11:21:38.334573 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 11:21:38 crc kubenswrapper[4766]: I0129 11:21:38.335893 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:38 crc kubenswrapper[4766]: I0129 11:21:38.335936 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:38 crc kubenswrapper[4766]: I0129 11:21:38.335945 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:38 crc kubenswrapper[4766]: I0129 11:21:38.338649 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 11:21:38 crc kubenswrapper[4766]: I0129 11:21:38.403847 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 11:21:38 crc kubenswrapper[4766]: I0129 11:21:38.404984 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:38 crc kubenswrapper[4766]: I0129 11:21:38.405033 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:38 crc kubenswrapper[4766]: I0129 11:21:38.405051 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:38 crc kubenswrapper[4766]: I0129 11:21:38.956985 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 08:58:55.689074154 +0000 UTC
Jan 29 11:21:39 crc kubenswrapper[4766]: I0129 11:21:39.408622 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 29 11:21:39 crc kubenswrapper[4766]: I0129 11:21:39.409208 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 29 11:21:39 crc kubenswrapper[4766]: I0129 11:21:39.410879 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231" exitCode=255
Jan 29 11:21:39 crc kubenswrapper[4766]: I0129 11:21:39.410942 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231"}
Jan 29 11:21:39 crc kubenswrapper[4766]: I0129 11:21:39.411025 4766 scope.go:117] "RemoveContainer" containerID="66e32fa2f2375acfefa6843a62686817c467b1081d3cbc67f8c4a2a8808e25b0"
Jan 29 11:21:39 crc kubenswrapper[4766]: I0129 11:21:39.411194 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 11:21:39 crc kubenswrapper[4766]: I0129 11:21:39.412272 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:39 crc kubenswrapper[4766]: I0129 11:21:39.412309 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:39 crc kubenswrapper[4766]: I0129 11:21:39.412323 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:39 crc kubenswrapper[4766]: I0129 11:21:39.413018 4766 scope.go:117] "RemoveContainer" containerID="0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231"
Jan 29 11:21:39 crc kubenswrapper[4766]: E0129 11:21:39.413213 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Jan 29 11:21:39 crc kubenswrapper[4766]: I0129 11:21:39.957889 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 22:09:48.703130658 +0000 UTC
Jan 29 11:21:40 crc kubenswrapper[4766]: I0129 11:21:40.736786 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 29 11:21:40 crc kubenswrapper[4766]: I0129 11:21:40.958710 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 06:02:03.046502633 +0000 UTC
Jan 29 11:21:41 crc kubenswrapper[4766]: I0129 11:21:41.958901 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 18:50:13.392329628 +0000 UTC
Jan 29 11:21:42 crc kubenswrapper[4766]: I0129 11:21:42.959534 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 10:20:05.86210104 +0000 UTC
Jan 29 11:21:43 crc kubenswrapper[4766]: I0129 11:21:43.959670 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 02:27:27.397077689 +0000 UTC
Jan 29 11:21:44 crc kubenswrapper[4766]: I0129 11:21:44.406500 4766 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 29 11:21:44 crc kubenswrapper[4766]: I0129 11:21:44.727974 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 11:21:44 crc kubenswrapper[4766]: I0129 11:21:44.728208 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 11:21:44 crc kubenswrapper[4766]: I0129 11:21:44.730225 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:44 crc kubenswrapper[4766]: I0129 11:21:44.730285 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:44 crc kubenswrapper[4766]: I0129 11:21:44.730302 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:44 crc kubenswrapper[4766]: I0129 11:21:44.731156 4766 scope.go:117] "RemoveContainer" containerID="0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231"
Jan 29 11:21:44 crc kubenswrapper[4766]: E0129 11:21:44.731340 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Jan 29 11:21:44 crc kubenswrapper[4766]: I0129 11:21:44.960720 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 22:56:06.04429374 +0000 UTC
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.089668 4766 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.184922 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.186301 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.186353 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.186365 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.186520 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.196108 4766 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.196441 4766 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.197790 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.197832 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.197849 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.197868 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.197877 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:45Z","lastTransitionTime":"2026-01-29T11:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:21:45 crc kubenswrapper[4766]: E0129 11:21:45.214512 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.219132 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.219194 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.219208 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.219228 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.219240 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:45Z","lastTransitionTime":"2026-01-29T11:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:21:45 crc kubenswrapper[4766]: E0129 11:21:45.232134 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.237284 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.237335 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.237347 4766 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.237368 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.237383 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:45Z","lastTransitionTime":"2026-01-29T11:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:45 crc kubenswrapper[4766]: E0129 11:21:45.250376 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.255762 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.255815 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.255828 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.255847 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.255859 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:45Z","lastTransitionTime":"2026-01-29T11:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:45 crc kubenswrapper[4766]: E0129 11:21:45.269273 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.274059 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.274129 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.274144 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.274166 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.274182 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:45Z","lastTransitionTime":"2026-01-29T11:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:45 crc kubenswrapper[4766]: E0129 11:21:45.285933 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:45 crc kubenswrapper[4766]: E0129 11:21:45.286063 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.287963 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.288091 4766
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.288181 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.288277 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.288373 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:45Z","lastTransitionTime":"2026-01-29T11:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.394961 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.395024 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.395035 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.395053 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.395064 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:45Z","lastTransitionTime":"2026-01-29T11:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.498467 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.498539 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.498552 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.498569 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.498579 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:45Z","lastTransitionTime":"2026-01-29T11:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.601880 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.601947 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.601961 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.601981 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.601992 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:45Z","lastTransitionTime":"2026-01-29T11:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.685148 4766 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.705301 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.705350 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.705360 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.705378 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.705390 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:45Z","lastTransitionTime":"2026-01-29T11:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.807834 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.807891 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.807907 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.807931 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.807945 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:45Z","lastTransitionTime":"2026-01-29T11:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.910405 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.910755 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.910841 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.910969 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.911057 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:45Z","lastTransitionTime":"2026-01-29T11:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.917750 4766 apiserver.go:52] "Watching apiserver"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.921991 4766 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.922703 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-hppjr","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-ovn-kubernetes/ovnkube-node-zn4kn","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-dns/node-resolver-vppxv","openshift-image-registry/node-ca-fzj49","openshift-machine-config-operator/machine-config-daemon-npgg8","openshift-multus/multus-gnk2d","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"]
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.923246 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.923285 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 29 11:21:45 crc kubenswrapper[4766]: E0129 11:21:45.923522 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.923451 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 11:21:45 crc kubenswrapper[4766]: E0129 11:21:45.924117 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.923851 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.924256 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.924348 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 11:21:45 crc kubenswrapper[4766]: E0129 11:21:45.924549 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.924797 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-vppxv"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.924960 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-fzj49"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.925039 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-hppjr"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.925342 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.925436 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-gnk2d"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.925613 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-npgg8"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.926812 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.929550 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.931757 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.931804 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.932216 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.932252 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.932485 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.935435 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.940114 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.940888 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.941848 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.942988 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.943543 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.943756 4766 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.943876 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.944337 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.945087 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.945245 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.945261 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.945577 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.945671 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.947662 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.947684 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.947802 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.947928 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.948030 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.948198 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.948349 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.948479 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.948671 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.949073 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.949378 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.949507 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.949626 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.949674 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.949931 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.949964 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.949996 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950030 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950055 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950082 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950104 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950130 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950158 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950181 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950208 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950369 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950402 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950448 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950473 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950499 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950523 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950553 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950577 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950601 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950627 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950652 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950683 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950709 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950737 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950764 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950787 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950811 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950833 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950860 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950884 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950890 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950905 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950928 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950949 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.950980 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951003 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951028 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951052 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951079 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951104 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951130 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951159 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951184 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951207 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951233 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951262 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951272 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951289 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951355 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951379 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951403 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951443 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951460 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951482 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951499 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951516 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951567 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951589 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951606 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951624 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951681 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951677 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951704 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951792 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951821 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951874 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.951894 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.952542 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.953824 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.953937 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.954380 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.954466 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.954538 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.954654 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.954821 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.955080 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.955330 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.955359 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.955455 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.955741 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.956028 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.956102 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.956353 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.956465 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.956505 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.956711 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.956775 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.956806 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.956838 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.956852 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.956888 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.956919 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.956946 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.956964 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.956979 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.956971 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957006 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957131 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957163 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957185 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957185 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957202 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957304 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957349 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957440 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957485 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957508 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957531 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957552 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957577 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957613 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957662 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957689 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957712 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957748 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957774 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957798 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957819 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957817 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957840 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957860 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957881 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957905 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957923 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957942 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957971 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.957990 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958012 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958030 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958047 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958072 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958090 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958107 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958142 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958174 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958214 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958214 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958233 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958254 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958272 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958290 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958286 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958305 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958326 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958343 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958372 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958390 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958424 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958443 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958462 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958506 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958526 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod
\"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958544 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958565 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958561 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958586 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958613 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958650 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958669 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958672 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958689 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958708 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958730 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958731 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958747 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958766 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958878 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958906 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958916 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958942 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958974 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959001 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959023 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959044 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959210 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959261 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959278 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959297 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959321 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959353 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959378 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959403 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959448 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959472 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959549 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959572 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959592 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959618 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959643 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959667 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959692 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959714 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959736 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959760 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959778 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959801 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959836 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959855 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959891 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959914 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959933 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.960004 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.960069 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.960106 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.960136 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.960162 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.960189 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.960213 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.960237 4766 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.960261 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.960288 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.960319 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.960346 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.960372 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.960394 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.960426 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.960447 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.960466 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 29 11:21:45 crc 
kubenswrapper[4766]: I0129 11:21:45.960484 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.960506 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.960525 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.960543 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.960562 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.960579 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.960600 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.961317 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.961371 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.961405 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 11:21:45 crc 
kubenswrapper[4766]: I0129 11:21:45.961517 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-multus-socket-dir-parent\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.961588 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-host-var-lib-cni-bin\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.961650 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ft7b\" (UniqueName: \"kubernetes.io/projected/009587c0-701e-4765-bd10-2ba52a2a9016-kube-api-access-4ft7b\") pod \"node-ca-fzj49\" (UID: \"009587c0-701e-4765-bd10-2ba52a2a9016\") " pod="openshift-image-registry/node-ca-fzj49" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.961680 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.961700 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.961721 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-run-systemd\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.961768 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/98622e63-ce1a-413d-8a0a-32610d52ab94-ovnkube-script-lib\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.961785 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xk98\" (UniqueName: \"kubernetes.io/projected/98622e63-ce1a-413d-8a0a-32610d52ab94-kube-api-access-8xk98\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.961802 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-multus-cni-dir\") pod \"multus-gnk2d\" (UID: 
\"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.961819 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98622e63-ce1a-413d-8a0a-32610d52ab94-env-overrides\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.961844 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-hostroot\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.961866 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.961884 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-cni-bin\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.961903 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-host-run-multus-certs\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.961923 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5bdd08bb-d32c-44f7-b7f8-ff1664ea543a-proxy-tls\") pod \"machine-config-daemon-npgg8\" (UID: \"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\") " pod="openshift-machine-config-operator/machine-config-daemon-npgg8" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.961944 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-run-ovn-kubernetes\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.961962 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-cnibin\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.962932 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-run-openvswitch\") pod 
\"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.962964 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.962985 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-os-release\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963009 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963035 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc-cnibin\") pod \"multus-additional-cni-plugins-hppjr\" (UID: \"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\") " pod="openshift-multus/multus-additional-cni-plugins-hppjr" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963060 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963083 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-kubelet\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963103 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6986483f-6521-45da-9034-8576037c32ad-cni-binary-copy\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963123 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963149 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963171 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/009587c0-701e-4765-bd10-2ba52a2a9016-serviceca\") pod \"node-ca-fzj49\" (UID: \"009587c0-701e-4765-bd10-2ba52a2a9016\") " pod="openshift-image-registry/node-ca-fzj49" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963189 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc-system-cni-dir\") pod \"multus-additional-cni-plugins-hppjr\" (UID: \"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\") " pod="openshift-multus/multus-additional-cni-plugins-hppjr" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963209 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9288\" (UniqueName: \"kubernetes.io/projected/b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc-kube-api-access-n9288\") pod \"multus-additional-cni-plugins-hppjr\" (UID: \"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\") " pod="openshift-multus/multus-additional-cni-plugins-hppjr" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963229 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5bdd08bb-d32c-44f7-b7f8-ff1664ea543a-rootfs\") pod \"machine-config-daemon-npgg8\" (UID: \"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\") " pod="openshift-machine-config-operator/machine-config-daemon-npgg8" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963247 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5bdd08bb-d32c-44f7-b7f8-ff1664ea543a-mcd-auth-proxy-config\") pod \"machine-config-daemon-npgg8\" (UID: \"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\") " pod="openshift-machine-config-operator/machine-config-daemon-npgg8" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963297 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-var-lib-openvswitch\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963315 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-etc-openvswitch\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963335 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-log-socket\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963352 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-system-cni-dir\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963370 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-multus-conf-dir\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963389 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kk27\" (UniqueName: \"kubernetes.io/projected/6986483f-6521-45da-9034-8576037c32ad-kube-api-access-5kk27\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963431 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963451 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7ce22607-a7fc-47f9-8d18-a8ef1351916c-hosts-file\") pod \"node-resolver-vppxv\" (UID: \"7ce22607-a7fc-47f9-8d18-a8ef1351916c\") " pod="openshift-dns/node-resolver-vppxv" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963470 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-slash\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963487 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-host-var-lib-kubelet\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963510 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6xqw\" (UniqueName: \"kubernetes.io/projected/5bdd08bb-d32c-44f7-b7f8-ff1664ea543a-kube-api-access-n6xqw\") pod \"machine-config-daemon-npgg8\" (UID: \"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\") " pod="openshift-machine-config-operator/machine-config-daemon-npgg8" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963535 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98622e63-ce1a-413d-8a0a-32610d52ab94-ovn-node-metrics-cert\") 
pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963560 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-host-run-k8s-cni-cncf-io\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963581 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-host-var-lib-cni-multus\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963602 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6986483f-6521-45da-9034-8576037c32ad-multus-daemon-config\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963620 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/009587c0-701e-4765-bd10-2ba52a2a9016-host\") pod \"node-ca-fzj49\" (UID: \"009587c0-701e-4765-bd10-2ba52a2a9016\") " pod="openshift-image-registry/node-ca-fzj49" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963640 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gdsj\" (UniqueName: \"kubernetes.io/projected/7ce22607-a7fc-47f9-8d18-a8ef1351916c-kube-api-access-7gdsj\") pod \"node-resolver-vppxv\" (UID: \"7ce22607-a7fc-47f9-8d18-a8ef1351916c\") " pod="openshift-dns/node-resolver-vppxv" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963659 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-systemd-units\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963679 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-run-ovn\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963700 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963727 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963750 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-run-netns\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963772 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963792 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-node-log\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963814 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-cni-netd\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963834 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98622e63-ce1a-413d-8a0a-32610d52ab94-ovnkube-config\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963856 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc-cni-binary-copy\") pod \"multus-additional-cni-plugins-hppjr\" (UID: \"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\") " pod="openshift-multus/multus-additional-cni-plugins-hppjr" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963877 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hppjr\" (UID: \"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\") " pod="openshift-multus/multus-additional-cni-plugins-hppjr" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963899 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-host-run-netns\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 
11:21:45.963932 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963955 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc-os-release\") pod \"multus-additional-cni-plugins-hppjr\" (UID: \"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\") " pod="openshift-multus/multus-additional-cni-plugins-hppjr" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963984 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.964008 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-etc-kubernetes\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.964031 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.964055 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hppjr\" (UID: \"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\") " pod="openshift-multus/multus-additional-cni-plugins-hppjr" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.964164 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.964181 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.964198 4766 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.964212 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" 
DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.964228 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.964241 4766 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.964256 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.964271 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.964284 4766 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.964301 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.964315 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.964329 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.964343 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.964357 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.964371 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.965919 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.964387 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.966378 4766 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.966404 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.966531 4766 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958587 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.958984 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959949 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959975 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.959987 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.960000 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.960074 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.960263 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.960500 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.960615 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.961008 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.961253 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.961367 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.961802 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.961825 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.962064 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.962389 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.962615 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.962574 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.962736 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.962984 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.962999 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963369 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963398 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963865 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.963898 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.964001 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.964030 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.964095 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.964553 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.964887 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.965143 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.965321 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.965631 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.965654 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.965760 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.965927 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.965951 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.965848 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.966455 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.966489 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.966511 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.966533 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: E0129 11:21:45.966606 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:21:46.466573004 +0000 UTC m=+43.578966195 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.967692 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.967759 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). 
InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.967951 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.966609 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.966906 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.966917 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.966940 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.967404 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.968016 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.968340 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.968351 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.969924 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.969947 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 08:07:00.163027495 +0000 UTC Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.970188 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.970841 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.970908 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.970972 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.971030 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.971572 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.971596 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.971810 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: E0129 11:21:45.972137 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 11:21:45 crc kubenswrapper[4766]: E0129 11:21:45.972224 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 11:21:46.47220444 +0000 UTC m=+43.584597871 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.972293 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.972321 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.972914 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.972925 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.973313 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.973289 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.973499 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.973709 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.974092 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.974256 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.974460 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.974645 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.974652 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.973781 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.974676 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.974692 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.974709 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.974721 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.974732 4766 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.974747 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.974759 4766 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.974770 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.974781 4766 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.974784 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.974791 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.974860 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.974928 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.974947 4766 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.974963 4766 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.974971 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.975016 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.975118 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.975172 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: E0129 11:21:45.975266 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.975215 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: E0129 11:21:45.975363 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 11:21:46.475310586 +0000 UTC m=+43.587703597 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.975388 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.975650 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.975763 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.975870 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.976069 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). 
InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.976136 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.976229 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.976450 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.976494 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.976487 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.976605 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.976751 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.976985 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.977054 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.977728 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.977785 4766 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.977818 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.978069 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.979892 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.980055 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.980978 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.986304 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.989525 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.989767 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: E0129 11:21:45.989910 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 11:21:45 crc kubenswrapper[4766]: E0129 11:21:45.990456 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 11:21:45 crc kubenswrapper[4766]: E0129 11:21:45.990492 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:21:45 crc kubenswrapper[4766]: E0129 11:21:45.990518 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 11:21:45 crc kubenswrapper[4766]: E0129 11:21:45.990547 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 11:21:45 crc kubenswrapper[4766]: E0129 11:21:45.990564 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:21:45 crc kubenswrapper[4766]: E0129 11:21:45.990611 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 11:21:46.49055099 +0000 UTC m=+43.602944221 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:21:45 crc kubenswrapper[4766]: E0129 11:21:45.990642 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 11:21:46.490630422 +0000 UTC m=+43.603023433 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.993558 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.994124 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.994497 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.994820 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.996192 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.996748 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.999101 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.999136 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:45 crc kubenswrapper[4766]: I0129 11:21:45.999491 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.000472 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.002094 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.002449 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.004218 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.004332 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.004673 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). 
InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.004765 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.005724 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.005788 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.005996 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:45.999998 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.008451 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.008742 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.008781 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.009222 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.009252 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.009299 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.009559 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.009635 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.010817 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.010855 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.010882 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.011356 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.011651 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.012113 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.012528 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.013000 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.013491 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.014644 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.014667 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.009745 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.014772 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.006614 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.015100 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.015108 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.015299 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.015389 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.016309 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.017545 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.017731 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.018252 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.018431 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.018700 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.018885 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.019544 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.019970 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.019986 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.019969 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.020099 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.020110 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.020127 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.020138 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:46Z","lastTransitionTime":"2026-01-29T11:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.020212 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.020530 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.022273 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.022364 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.022743 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.024431 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.031293 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.033004 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.034504 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.043365 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.049494 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.067752 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32f
a41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"lo
g-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip
\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.075932 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-multus-socket-dir-parent\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.075978 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-host-var-lib-cni-bin\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076000 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ft7b\" (UniqueName: \"kubernetes.io/projected/009587c0-701e-4765-bd10-2ba52a2a9016-kube-api-access-4ft7b\") pod \"node-ca-fzj49\" (UID: \"009587c0-701e-4765-bd10-2ba52a2a9016\") " pod="openshift-image-registry/node-ca-fzj49" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076035 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-run-systemd\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076052 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/98622e63-ce1a-413d-8a0a-32610d52ab94-ovnkube-script-lib\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076067 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xk98\" (UniqueName: \"kubernetes.io/projected/98622e63-ce1a-413d-8a0a-32610d52ab94-kube-api-access-8xk98\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076082 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-multus-cni-dir\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076097 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98622e63-ce1a-413d-8a0a-32610d52ab94-env-overrides\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076113 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-hostroot\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076117 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-multus-socket-dir-parent\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076137 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-cni-bin\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076155 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-host-run-multus-certs\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076193 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-host-run-multus-certs\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076209 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5bdd08bb-d32c-44f7-b7f8-ff1664ea543a-proxy-tls\") pod \"machine-config-daemon-npgg8\" (UID: \"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\") " pod="openshift-machine-config-operator/machine-config-daemon-npgg8" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076253 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-run-ovn-kubernetes\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076264 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-host-var-lib-cni-bin\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076275 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-cnibin\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076298 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-run-openvswitch\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076334 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076354 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-os-release\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076372 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc-cnibin\") pod \"multus-additional-cni-plugins-hppjr\" (UID: \"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\") " pod="openshift-multus/multus-additional-cni-plugins-hppjr" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076453 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-multus-cni-dir\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076461 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-run-systemd\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076511 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-cnibin\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076540 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc-system-cni-dir\") pod \"multus-additional-cni-plugins-hppjr\" (UID: \"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\") " 
pod="openshift-multus/multus-additional-cni-plugins-hppjr" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076746 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-kubelet\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076765 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6986483f-6521-45da-9034-8576037c32ad-cni-binary-copy\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076872 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/009587c0-701e-4765-bd10-2ba52a2a9016-serviceca\") pod \"node-ca-fzj49\" (UID: \"009587c0-701e-4765-bd10-2ba52a2a9016\") " pod="openshift-image-registry/node-ca-fzj49" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076906 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-hostroot\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076910 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9288\" (UniqueName: \"kubernetes.io/projected/b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc-kube-api-access-n9288\") pod \"multus-additional-cni-plugins-hppjr\" (UID: \"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\") " pod="openshift-multus/multus-additional-cni-plugins-hppjr" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076952 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5bdd08bb-d32c-44f7-b7f8-ff1664ea543a-rootfs\") pod \"machine-config-daemon-npgg8\" (UID: \"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\") " pod="openshift-machine-config-operator/machine-config-daemon-npgg8" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076972 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5bdd08bb-d32c-44f7-b7f8-ff1664ea543a-mcd-auth-proxy-config\") pod \"machine-config-daemon-npgg8\" (UID: \"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\") " pod="openshift-machine-config-operator/machine-config-daemon-npgg8" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.076992 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-var-lib-openvswitch\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077007 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-etc-openvswitch\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 
crc kubenswrapper[4766]: I0129 11:21:46.077022 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-log-socket\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077079 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-system-cni-dir\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077096 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-multus-conf-dir\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077111 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kk27\" (UniqueName: \"kubernetes.io/projected/6986483f-6521-45da-9034-8576037c32ad-kube-api-access-5kk27\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077165 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7ce22607-a7fc-47f9-8d18-a8ef1351916c-hosts-file\") pod \"node-resolver-vppxv\" (UID: \"7ce22607-a7fc-47f9-8d18-a8ef1351916c\") " pod="openshift-dns/node-resolver-vppxv" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077181 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-slash\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077196 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-host-var-lib-kubelet\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077212 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6xqw\" (UniqueName: \"kubernetes.io/projected/5bdd08bb-d32c-44f7-b7f8-ff1664ea543a-kube-api-access-n6xqw\") pod \"machine-config-daemon-npgg8\" (UID: \"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\") " pod="openshift-machine-config-operator/machine-config-daemon-npgg8" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077234 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98622e63-ce1a-413d-8a0a-32610d52ab94-ovn-node-metrics-cert\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077237 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98622e63-ce1a-413d-8a0a-32610d52ab94-env-overrides\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077254 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-host-run-k8s-cni-cncf-io\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077280 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-host-run-k8s-cni-cncf-io\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077289 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-host-var-lib-cni-multus\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077313 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6986483f-6521-45da-9034-8576037c32ad-multus-daemon-config\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077326 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5bdd08bb-d32c-44f7-b7f8-ff1664ea543a-rootfs\") pod \"machine-config-daemon-npgg8\" (UID: \"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\") " pod="openshift-machine-config-operator/machine-config-daemon-npgg8" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077336 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/009587c0-701e-4765-bd10-2ba52a2a9016-host\") pod \"node-ca-fzj49\" (UID: \"009587c0-701e-4765-bd10-2ba52a2a9016\") " pod="openshift-image-registry/node-ca-fzj49" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077375 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-host-var-lib-cni-multus\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077453 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-cni-bin\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077456 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gdsj\" (UniqueName: \"kubernetes.io/projected/7ce22607-a7fc-47f9-8d18-a8ef1351916c-kube-api-access-7gdsj\") pod \"node-resolver-vppxv\" (UID: 
\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\") " pod="openshift-dns/node-resolver-vppxv" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077553 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-systemd-units\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077574 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-run-ovn\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077635 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-var-lib-openvswitch\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077641 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-kubelet\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077687 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-log-socket\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077726 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-system-cni-dir\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077668 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-etc-openvswitch\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077765 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-multus-conf-dir\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077760 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077824 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-os-release\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077874 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc-cnibin\") pod \"multus-additional-cni-plugins-hppjr\" (UID: \"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\") " pod="openshift-multus/multus-additional-cni-plugins-hppjr" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077910 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc-system-cni-dir\") pod \"multus-additional-cni-plugins-hppjr\" (UID: \"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\") " pod="openshift-multus/multus-additional-cni-plugins-hppjr" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077930 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5bdd08bb-d32c-44f7-b7f8-ff1664ea543a-mcd-auth-proxy-config\") pod \"machine-config-daemon-npgg8\" (UID: \"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\") " pod="openshift-machine-config-operator/machine-config-daemon-npgg8" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077984 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7ce22607-a7fc-47f9-8d18-a8ef1351916c-hosts-file\") pod \"node-resolver-vppxv\" (UID: \"7ce22607-a7fc-47f9-8d18-a8ef1351916c\") " pod="openshift-dns/node-resolver-vppxv" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.078018 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-slash\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077180 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/98622e63-ce1a-413d-8a0a-32610d52ab94-ovnkube-script-lib\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.078054 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-run-ovn-kubernetes\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.078088 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-host-var-lib-kubelet\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.078737 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/6986483f-6521-45da-9034-8576037c32ad-multus-daemon-config\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.078948 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/009587c0-701e-4765-bd10-2ba52a2a9016-host\") pod \"node-ca-fzj49\" (UID: \"009587c0-701e-4765-bd10-2ba52a2a9016\") " pod="openshift-image-registry/node-ca-fzj49" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.079300 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-run-openvswitch\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.079448 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6986483f-6521-45da-9034-8576037c32ad-cni-binary-copy\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.079482 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.079503 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-systemd-units\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.079511 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-run-ovn\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.079623 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/009587c0-701e-4765-bd10-2ba52a2a9016-serviceca\") pod \"node-ca-fzj49\" (UID: \"009587c0-701e-4765-bd10-2ba52a2a9016\") " pod="openshift-image-registry/node-ca-fzj49" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.080359 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.077594 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.083566 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-run-netns\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.083623 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-node-log\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.083642 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-cni-netd\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.083659 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98622e63-ce1a-413d-8a0a-32610d52ab94-ovnkube-config\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.083679 4766 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc-cni-binary-copy\") pod \"multus-additional-cni-plugins-hppjr\" (UID: \"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\") " pod="openshift-multus/multus-additional-cni-plugins-hppjr" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.083700 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hppjr\" (UID: \"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\") " pod="openshift-multus/multus-additional-cni-plugins-hppjr" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.083720 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-host-run-netns\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.083749 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc-os-release\") pod \"multus-additional-cni-plugins-hppjr\" (UID: \"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\") " pod="openshift-multus/multus-additional-cni-plugins-hppjr" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.083781 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-etc-kubernetes\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.083801 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.083819 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hppjr\" (UID: \"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\") " pod="openshift-multus/multus-additional-cni-plugins-hppjr" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.083974 4766 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.083991 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084003 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: 
I0129 11:21:46.084016 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084028 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084038 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084048 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084060 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084071 4766 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084082 4766 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084093 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084102 4766 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084112 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084122 4766 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084132 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084142 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084151 4766 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084163 4766 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084173 4766 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084184 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084194 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084203 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084215 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084224 4766 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084236 4766 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084245 4766 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084254 4766 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084264 4766 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084275 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084285 4766 reconciler_common.go:293] "Volume detached for volume 
\"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084294 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084304 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084313 4766 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084323 4766 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084336 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084345 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084356 4766 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084370 4766 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084379 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084389 4766 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084399 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084422 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084432 4766 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084442 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084453 4766 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084464 4766 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084474 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084486 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084497 4766 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084507 4766 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084517 4766 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084527 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084542 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084552 4766 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084561 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084572 4766 
reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084581 4766 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084598 4766 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084612 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084623 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084637 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084646 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084656 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084665 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084675 4766 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084685 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084693 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084708 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc 
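
The reconciler_common.go:293 entries above are the kubelet volume manager's reconciler confirming, volume by volume, that teardown completed; DevicePath \"\" is expected for configmap, secret, projected and empty-dir volumes, which have no block device path. A rough tallying sketch (hypothetical helper, not part of the kubelet; assumes the journal text arrives on stdin):

// detached-volumes.go - a rough sketch (hypothetical helper, not part of the
// kubelet): tallies the reconciler's "Volume detached" entries per volume
// plugin so the churn above is easier to read.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// The UniqueName field encodes the plugin: kubernetes.io/<plugin>/<uid>-<name>.
	re := regexp.MustCompile(`Volume detached for volume .*?kubernetes\.io/([a-z-]+)/`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]]++ // e.g. configmap, secret, projected, empty-dir
		}
	}
	for plugin, n := range counts {
		fmt.Printf("%-10s %d\n", plugin, n)
	}
}

Piping the saved journal through it (e.g. journalctl -u kubelet | go run detached-volumes.go) breaks the detach storm down by plugin; most of the churn here is projected service-account tokens, configmaps and secrets.

Jan 29 11:21:46 crc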
kubenswrapper[4766]: I0129 11:21:46.084719 4766 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084729 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084739 4766 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084749 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084759 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084834 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084845 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084854 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084863 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084872 4766 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084881 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084893 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084902 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 
11:21:46.084913 4766 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084923 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084932 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084940 4766 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084949 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084959 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084968 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084977 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084986 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.084995 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.085004 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.085012 4766 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.085739 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc-cni-binary-copy\") pod \"multus-additional-cni-plugins-hppjr\" (UID: \"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\") " 
pod="openshift-multus/multus-additional-cni-plugins-hppjr" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.085998 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-run-netns\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.086132 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-node-log\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.086161 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-cni-netd\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.086646 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98622e63-ce1a-413d-8a0a-32610d52ab94-ovnkube-config\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.086864 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-etc-kubernetes\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.086958 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc-os-release\") pod \"multus-additional-cni-plugins-hppjr\" (UID: \"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\") " pod="openshift-multus/multus-additional-cni-plugins-hppjr" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087042 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6986483f-6521-45da-9034-8576037c32ad-host-run-netns\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087049 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087090 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087105 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node 
\"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087128 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087140 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087151 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087161 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087171 4766 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087184 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087194 4766 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087204 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087212 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087223 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087234 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087245 4766 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087256 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node 
\"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087265 4766 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087275 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087286 4766 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087295 4766 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087305 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087315 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087325 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087335 4766 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087345 4766 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087354 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087365 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087374 4766 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087384 4766 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node 
\"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087395 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087419 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087428 4766 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087440 4766 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087450 4766 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087461 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087471 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087481 4766 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087491 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087501 4766 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087511 4766 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087521 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087531 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc 
kubenswrapper[4766]: I0129 11:21:46.087543 4766 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087552 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087562 4766 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087573 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087582 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087591 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087601 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087611 4766 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087621 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087631 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087641 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087652 4766 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087662 4766 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node 
\"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087671 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087681 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087692 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087703 4766 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087714 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087724 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087734 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087746 4766 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087757 4766 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087767 4766 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087777 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087788 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087804 4766 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 
29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.087814 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.090595 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hppjr\" (UID: \"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\") " pod="openshift-multus/multus-additional-cni-plugins-hppjr" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.091391 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98622e63-ce1a-413d-8a0a-32610d52ab94-ovn-node-metrics-cert\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.093575 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5bdd08bb-d32c-44f7-b7f8-ff1664ea543a-proxy-tls\") pod \"machine-config-daemon-npgg8\" (UID: \"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\") " pod="openshift-machine-config-operator/machine-config-daemon-npgg8" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.094488 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kk27\" (UniqueName: \"kubernetes.io/projected/6986483f-6521-45da-9034-8576037c32ad-kube-api-access-5kk27\") pod \"multus-gnk2d\" (UID: \"6986483f-6521-45da-9034-8576037c32ad\") " pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.094788 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xk98\" (UniqueName: \"kubernetes.io/projected/98622e63-ce1a-413d-8a0a-32610d52ab94-kube-api-access-8xk98\") pod \"ovnkube-node-zn4kn\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.095343 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ft7b\" (UniqueName: \"kubernetes.io/projected/009587c0-701e-4765-bd10-2ba52a2a9016-kube-api-access-4ft7b\") pod \"node-ca-fzj49\" (UID: \"009587c0-701e-4765-bd10-2ba52a2a9016\") " pod="openshift-image-registry/node-ca-fzj49" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.097449 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gdsj\" (UniqueName: \"kubernetes.io/projected/7ce22607-a7fc-47f9-8d18-a8ef1351916c-kube-api-access-7gdsj\") pod \"node-resolver-vppxv\" (UID: \"7ce22607-a7fc-47f9-8d18-a8ef1351916c\") " pod="openshift-dns/node-resolver-vppxv" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.097566 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9288\" (UniqueName: \"kubernetes.io/projected/b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc-kube-api-access-n9288\") pod \"multus-additional-cni-plugins-hppjr\" (UID: \"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\") " pod="openshift-multus/multus-additional-cni-plugins-hppjr"
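
Interleaved with the teardown, operation_generator.go:637 reports MountVolume.SetUp succeeding for the node-level daemon pods (multus, ovnkube-node, machine-config-daemon, node-ca, node-resolver) as their volumes are remounted. Each mount materializes under the pod's directory on disk; the sketch below (an illustrative, hypothetical helper, assuming the kubelet's usual /var/lib/kubelet/pods/<uid>/volumes/<plugin>/<name> layout) lists what got set up for one pod UID:

// pod-volumes.go - an illustrative sketch (hypothetical helper): lists the
// volume directories behind the MountVolume.SetUp entries for one pod, e.g.
//   go run pod-volumes.go b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: pod-volumes <pod-uid>")
		os.Exit(2)
	}
	// Assumed layout: /var/lib/kubelet/pods/<uid>/volumes/<plugin>/<volume-name>,
	// e.g. .../volumes/kubernetes.io~projected/kube-api-access-n9288.
	pattern := filepath.Join("/var/lib/kubelet/pods", os.Args[1], "volumes", "*", "*")
	matches, err := filepath.Glob(pattern)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, m := range matches {
		fmt.Println(m)
	}
}

Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.100981 4766 status_manager.go:875] "Failed to update status for pod"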
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.102838 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6xqw\" (UniqueName: \"kubernetes.io/projected/5bdd08bb-d32c-44f7-b7f8-ff1664ea543a-kube-api-access-n6xqw\") pod \"machine-config-daemon-npgg8\" (UID: \"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\") " pod="openshift-machine-config-operator/machine-config-daemon-npgg8" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.113526 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.116943 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hppjr\" (UID: \"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\") " pod="openshift-multus/multus-additional-cni-plugins-hppjr" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.123056 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.123106 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.123116 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.123134 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.123145 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:46Z","lastTransitionTime":"2026-01-29T11:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.130346 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.140997 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.150193 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.160273 4766 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.171208 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.226750 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.226796 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.226807 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.226826 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.226838 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:46Z","lastTransitionTime":"2026-01-29T11:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.241095 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.258193 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 11:21:46 crc kubenswrapper[4766]: W0129 11:21:46.277180 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-e50a4c508168be1c39d623f46e90b2dfdf6e10c0a428ec1329f9436052a2f730 WatchSource:0}: Error finding container e50a4c508168be1c39d623f46e90b2dfdf6e10c0a428ec1329f9436052a2f730: Status 404 returned error can't find the container with id e50a4c508168be1c39d623f46e90b2dfdf6e10c0a428ec1329f9436052a2f730 Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.279840 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 11:21:46 crc kubenswrapper[4766]: W0129 11:21:46.292261 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-de3a7385e83dab848c14331b1449e307f957481ffe5b44e66d23d9d7c502ce14 WatchSource:0}: Error finding container de3a7385e83dab848c14331b1449e307f957481ffe5b44e66d23d9d7c502ce14: Status 404 returned error can't find the container with id de3a7385e83dab848c14331b1449e307f957481ffe5b44e66d23d9d7c502ce14 Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.329703 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.329746 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.329759 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.329782 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.329797 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:46Z","lastTransitionTime":"2026-01-29T11:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.332282 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-vppxv" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.347048 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-fzj49" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.360664 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.362503 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-hppjr" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.368849 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.375369 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-gnk2d" Jan 29 11:21:46 crc kubenswrapper[4766]: W0129 11:21:46.404009 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod009587c0_701e_4765_bd10_2ba52a2a9016.slice/crio-6dc720451a8b1e4669a6d65491186a3816bc00037b8fe0e42164169d31378e22 WatchSource:0}: Error finding container 6dc720451a8b1e4669a6d65491186a3816bc00037b8fe0e42164169d31378e22: Status 404 returned error can't find the container with id 6dc720451a8b1e4669a6d65491186a3816bc00037b8fe0e42164169d31378e22 Jan 29 11:21:46 crc kubenswrapper[4766]: W0129 11:21:46.405277 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bdd08bb_d32c_44f7_b7f8_ff1664ea543a.slice/crio-daaa78c1d94f69327ce74cf129c2cee285c929dfd2ce11ed2ebdfdafbfeb6333 WatchSource:0}: Error finding container daaa78c1d94f69327ce74cf129c2cee285c929dfd2ce11ed2ebdfdafbfeb6333: Status 404 returned error can't find the container with id daaa78c1d94f69327ce74cf129c2cee285c929dfd2ce11ed2ebdfdafbfeb6333 Jan 29 11:21:46 crc kubenswrapper[4766]: W0129 11:21:46.425492 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9fe5f65_adbd_48b9_aa58_dc26c6bb32dc.slice/crio-b53e855601cf413fe6271c1f9536153bcbfac26059b1c8744b4b05f11b8dd574 WatchSource:0}: Error finding container b53e855601cf413fe6271c1f9536153bcbfac26059b1c8744b4b05f11b8dd574: Status 404 returned error can't find the container with id b53e855601cf413fe6271c1f9536153bcbfac26059b1c8744b4b05f11b8dd574 Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.435163 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.435230 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.435247 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.435268 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.435281 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:46Z","lastTransitionTime":"2026-01-29T11:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.491090 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.491198 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.491237 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.491258 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.491279 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:21:46 crc kubenswrapper[4766]: E0129 11:21:46.491397 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 11:21:46 crc kubenswrapper[4766]: E0129 11:21:46.491446 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 11:21:46 crc kubenswrapper[4766]: E0129 11:21:46.491458 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:21:46 crc kubenswrapper[4766]: E0129 11:21:46.491510 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 11:21:47.491492258 +0000 UTC m=+44.603885269 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:21:46 crc kubenswrapper[4766]: E0129 11:21:46.491893 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:21:47.491882598 +0000 UTC m=+44.604275609 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:21:46 crc kubenswrapper[4766]: E0129 11:21:46.491962 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 11:21:46 crc kubenswrapper[4766]: E0129 11:21:46.491974 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 11:21:46 crc kubenswrapper[4766]: E0129 11:21:46.491983 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:21:46 crc kubenswrapper[4766]: E0129 11:21:46.492007 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 11:21:47.492001062 +0000 UTC m=+44.604394073 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:21:46 crc kubenswrapper[4766]: E0129 11:21:46.492062 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 11:21:46 crc kubenswrapper[4766]: E0129 11:21:46.492082 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 11:21:47.492077034 +0000 UTC m=+44.604470035 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 11:21:46 crc kubenswrapper[4766]: E0129 11:21:46.492083 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 11:21:46 crc kubenswrapper[4766]: E0129 11:21:46.492188 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 11:21:47.492159676 +0000 UTC m=+44.604552687 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.537787 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.537835 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.537848 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.537867 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.537881 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:46Z","lastTransitionTime":"2026-01-29T11:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.642488 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.642562 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.642588 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.642622 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.642650 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:46Z","lastTransitionTime":"2026-01-29T11:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.746528 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.746618 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.746640 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.746669 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.746692 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:46Z","lastTransitionTime":"2026-01-29T11:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.758332 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"51c0561a50d80ebb07b96621ab6f023f974ec9fde3f77ac8d8aaba3d020ea029"} Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.759542 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" event={"ID":"98622e63-ce1a-413d-8a0a-32610d52ab94","Type":"ContainerStarted","Data":"f2cae48be25a036d875e619bf77b27b1a838220c53510580128157398d687d9c"} Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.760672 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gnk2d" event={"ID":"6986483f-6521-45da-9034-8576037c32ad","Type":"ContainerStarted","Data":"e270128a15cfa797559b857e247c9035f2160cea58523fff3ed4e471af0d3b1b"} Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.762068 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-fzj49" event={"ID":"009587c0-701e-4765-bd10-2ba52a2a9016","Type":"ContainerStarted","Data":"6dc720451a8b1e4669a6d65491186a3816bc00037b8fe0e42164169d31378e22"} Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.763451 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vppxv" event={"ID":"7ce22607-a7fc-47f9-8d18-a8ef1351916c","Type":"ContainerStarted","Data":"bd7b872117f00f453fc714d6aefe02d42f1c0d7a240bdfffd79ad03182303df8"} Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.764821 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" event={"ID":"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc","Type":"ContainerStarted","Data":"b53e855601cf413fe6271c1f9536153bcbfac26059b1c8744b4b05f11b8dd574"} Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.765775 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" event={"ID":"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a","Type":"ContainerStarted","Data":"daaa78c1d94f69327ce74cf129c2cee285c929dfd2ce11ed2ebdfdafbfeb6333"} Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.767256 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"de3a7385e83dab848c14331b1449e307f957481ffe5b44e66d23d9d7c502ce14"} Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.768936 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e50a4c508168be1c39d623f46e90b2dfdf6e10c0a428ec1329f9436052a2f730"} Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.850111 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.850189 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.850216 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.850252 4766 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.850277 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:46Z","lastTransitionTime":"2026-01-29T11:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.953201 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.953250 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.953262 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.953280 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.953293 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:46Z","lastTransitionTime":"2026-01-29T11:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:46 crc kubenswrapper[4766]: I0129 11:21:46.971110 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 18:41:26.194534612 +0000 UTC Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.055734 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.055797 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.055817 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.055843 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.055863 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:47Z","lastTransitionTime":"2026-01-29T11:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.158983 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.159028 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.159036 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.159050 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.159060 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:47Z","lastTransitionTime":"2026-01-29T11:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.223685 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:21:47 crc kubenswrapper[4766]: E0129 11:21:47.223832 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.228755 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.229850 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.231494 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.232572 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.234046 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.234857 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.235694 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.237108 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.238089 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.239772 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.240668 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.242194 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.242955 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.243742 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.245371 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.246288 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.247582 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.248019 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.248669 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.249905 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.250532 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.251785 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.252270 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.253541 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.254309 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.254983 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.256226 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.256806 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.257938 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.258563 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.259583 4766 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.259701 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.261679 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.262674 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.262723 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.262732 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.262749 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.262761 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:47Z","lastTransitionTime":"2026-01-29T11:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.262809 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.263505 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.265358 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.266574 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.267800 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.268649 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.269963 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.270540 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.271730 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.272455 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.273628 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.274185 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.275283 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.275965 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 29 11:21:47 
crc kubenswrapper[4766]: I0129 11:21:47.277255 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.277853 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.279019 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.279628 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.280961 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.281715 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.282333 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.367216 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.367253 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.367271 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.367299 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.367318 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:47Z","lastTransitionTime":"2026-01-29T11:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.470027 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.470346 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.470356 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.470373 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.470385 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:47Z","lastTransitionTime":"2026-01-29T11:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.501762 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:21:47 crc kubenswrapper[4766]: E0129 11:21:47.501880 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:21:49.501852898 +0000 UTC m=+46.614245909 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.501913 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.501942 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.501968 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.501990 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:21:47 crc kubenswrapper[4766]: E0129 11:21:47.502069 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 11:21:47 crc kubenswrapper[4766]: E0129 11:21:47.502088 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 11:21:47 crc kubenswrapper[4766]: E0129 11:21:47.502101 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 11:21:47 crc kubenswrapper[4766]: E0129 11:21:47.502112 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:21:47 crc kubenswrapper[4766]: E0129 11:21:47.502163 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 
11:21:47 crc kubenswrapper[4766]: E0129 11:21:47.502174 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 11:21:47 crc kubenswrapper[4766]: E0129 11:21:47.502365 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 11:21:47 crc kubenswrapper[4766]: E0129 11:21:47.502387 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:21:47 crc kubenswrapper[4766]: E0129 11:21:47.502112 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 11:21:49.502101165 +0000 UTC m=+46.614494176 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 11:21:47 crc kubenswrapper[4766]: E0129 11:21:47.502471 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 11:21:49.502445685 +0000 UTC m=+46.614838866 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:21:47 crc kubenswrapper[4766]: E0129 11:21:47.502497 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 11:21:49.502486916 +0000 UTC m=+46.614880127 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:21:47 crc kubenswrapper[4766]: E0129 11:21:47.502524 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 11:21:49.502506277 +0000 UTC m=+46.614899498 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.573881 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.573937 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.573950 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.573983 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.573997 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:47Z","lastTransitionTime":"2026-01-29T11:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.676902 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.676948 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.676966 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.676985 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.676997 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:47Z","lastTransitionTime":"2026-01-29T11:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.774172 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" event={"ID":"98622e63-ce1a-413d-8a0a-32610d52ab94","Type":"ContainerStarted","Data":"b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c"} Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.775928 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" event={"ID":"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a","Type":"ContainerStarted","Data":"9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576"} Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.780760 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.780810 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.780821 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.780839 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.780849 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:47Z","lastTransitionTime":"2026-01-29T11:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.884711 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.884772 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.884789 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.884813 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.884829 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:47Z","lastTransitionTime":"2026-01-29T11:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.971348 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 19:27:30.062155072 +0000 UTC Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.989041 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.989105 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.989123 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.989147 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:47 crc kubenswrapper[4766]: I0129 11:21:47.989166 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:47Z","lastTransitionTime":"2026-01-29T11:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.092284 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.092334 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.092344 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.092363 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.092374 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:48Z","lastTransitionTime":"2026-01-29T11:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.195558 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.195605 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.195615 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.195634 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.195645 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:48Z","lastTransitionTime":"2026-01-29T11:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.224385 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.224473 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:21:48 crc kubenswrapper[4766]: E0129 11:21:48.224572 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:21:48 crc kubenswrapper[4766]: E0129 11:21:48.224639 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.298437 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.298500 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.298513 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.298543 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.298558 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:48Z","lastTransitionTime":"2026-01-29T11:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.401128 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.401181 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.401196 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.401217 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.401234 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:48Z","lastTransitionTime":"2026-01-29T11:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.504680 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.504733 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.504745 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.504767 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.504782 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:48Z","lastTransitionTime":"2026-01-29T11:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.612917 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.612971 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.612982 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.613000 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.613013 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:48Z","lastTransitionTime":"2026-01-29T11:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.716549 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.716608 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.716619 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.716641 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.716653 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:48Z","lastTransitionTime":"2026-01-29T11:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.781913 4766 generic.go:334] "Generic (PLEG): container finished" podID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerID="b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c" exitCode=0 Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.781998 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" event={"ID":"98622e63-ce1a-413d-8a0a-32610d52ab94","Type":"ContainerDied","Data":"b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c"} Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.787741 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" event={"ID":"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc","Type":"ContainerStarted","Data":"805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b"} Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.791105 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" event={"ID":"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a","Type":"ContainerStarted","Data":"39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5"} Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.794009 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908"} Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.794057 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016"} Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.795689 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gnk2d" event={"ID":"6986483f-6521-45da-9034-8576037c32ad","Type":"ContainerStarted","Data":"a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36"} Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.797050 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-fzj49" event={"ID":"009587c0-701e-4765-bd10-2ba52a2a9016","Type":"ContainerStarted","Data":"bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9"} Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.802313 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vppxv" event={"ID":"7ce22607-a7fc-47f9-8d18-a8ef1351916c","Type":"ContainerStarted","Data":"ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67"} Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.802981 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.809087 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b"} Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.822139 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.822199 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.822213 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.822236 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.822249 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:48Z","lastTransitionTime":"2026-01-29T11:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.825800 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c2
97831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.837234 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.850622 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.864511 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.878174 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.896735 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.912764 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.927829 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.927876 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.927887 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.927912 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.927926 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:48Z","lastTransitionTime":"2026-01-29T11:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.936123 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.951338 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.967737 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.971968 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 14:59:36.472083999 +0000 UTC Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.982704 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:48 crc kubenswrapper[4766]: I0129 11:21:48.995860 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.008358 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.024479 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\"
:\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.030437 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.030497 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.030511 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.030533 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.030574 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:49Z","lastTransitionTime":"2026-01-29T11:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.039170 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.054360 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.064609 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d11
21fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.076303 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.087469 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.102185 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.114965 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.133679 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.134050 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.134155 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.134086 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.134266 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.134454 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:49Z","lastTransitionTime":"2026-01-29T11:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.155517 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.224464 
4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:21:49 crc kubenswrapper[4766]: E0129 11:21:49.224635 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.237159 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.237220 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.237233 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.237253 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.237269 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:49Z","lastTransitionTime":"2026-01-29T11:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.339872 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.340202 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.340215 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.340245 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.340263 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:49Z","lastTransitionTime":"2026-01-29T11:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.443765 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.444109 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.444198 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.444292 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.444393 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:49Z","lastTransitionTime":"2026-01-29T11:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.524675 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.524873 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:21:49 crc kubenswrapper[4766]: E0129 11:21:49.524937 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:21:53.524899904 +0000 UTC m=+50.637292905 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.525006 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:21:49 crc kubenswrapper[4766]: E0129 11:21:49.525024 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 11:21:49 crc kubenswrapper[4766]: E0129 11:21:49.525101 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 11:21:53.525081349 +0000 UTC m=+50.637474560 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.525128 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.525162 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:21:49 crc kubenswrapper[4766]: E0129 11:21:49.525215 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 11:21:49 crc kubenswrapper[4766]: E0129 11:21:49.525273 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 11:21:49 crc kubenswrapper[4766]: E0129 11:21:49.525290 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 11:21:49 crc kubenswrapper[4766]: E0129 11:21:49.525304 4766 projected.go:194] Error preparing data 
for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:21:49 crc kubenswrapper[4766]: E0129 11:21:49.525340 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 11:21:53.525307555 +0000 UTC m=+50.637700736 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 11:21:49 crc kubenswrapper[4766]: E0129 11:21:49.525371 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 11:21:53.525358097 +0000 UTC m=+50.637751118 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:21:49 crc kubenswrapper[4766]: E0129 11:21:49.525379 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 11:21:49 crc kubenswrapper[4766]: E0129 11:21:49.525397 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 11:21:49 crc kubenswrapper[4766]: E0129 11:21:49.525423 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:21:49 crc kubenswrapper[4766]: E0129 11:21:49.525453 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 11:21:53.525444629 +0000 UTC m=+50.637837870 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.547924 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.547973 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.547986 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.548006 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.548019 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:49Z","lastTransitionTime":"2026-01-29T11:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.651021 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.651068 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.651098 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.651115 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.651127 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:49Z","lastTransitionTime":"2026-01-29T11:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.754185 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.754221 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.754231 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.754246 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.754257 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:49Z","lastTransitionTime":"2026-01-29T11:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.816403 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" event={"ID":"98622e63-ce1a-413d-8a0a-32610d52ab94","Type":"ContainerStarted","Data":"4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635"} Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.816479 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" event={"ID":"98622e63-ce1a-413d-8a0a-32610d52ab94","Type":"ContainerStarted","Data":"57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097"} Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.816490 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" event={"ID":"98622e63-ce1a-413d-8a0a-32610d52ab94","Type":"ContainerStarted","Data":"84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9"} Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.818313 4766 generic.go:334] "Generic (PLEG): container finished" podID="b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc" containerID="805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b" exitCode=0 Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.818383 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" event={"ID":"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc","Type":"ContainerDied","Data":"805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b"} Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.820837 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d"} Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.833331 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:49Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.854971 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:49Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.856789 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.856809 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.856820 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.856836 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.856847 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:49Z","lastTransitionTime":"2026-01-29T11:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.882655 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:49Z 
is after 2025-08-24T17:21:41Z" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.896271 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:49Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.910637 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:49Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.929942 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:49Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.942110 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:49Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.960502 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.960547 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.960556 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.960578 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.960588 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:49Z","lastTransitionTime":"2026-01-29T11:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.963202 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:49Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.972960 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 06:32:44.875002215 +0000 UTC Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.979959 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:49Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:49 crc kubenswrapper[4766]: I0129 11:21:49.998731 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:49Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.013590 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.033141 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.054250 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.067772 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.067833 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.067846 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.067868 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.067882 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:50Z","lastTransitionTime":"2026-01-29T11:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.069747 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.088195 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"
tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.113086 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:50Z 
is after 2025-08-24T17:21:41Z" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.137110 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.159716 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.171066 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.171156 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.171171 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.171194 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.171207 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:50Z","lastTransitionTime":"2026-01-29T11:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.176823 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.193080 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.209440 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.223949 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.224219 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:21:50 crc kubenswrapper[4766]: E0129 11:21:50.224273 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:21:50 crc kubenswrapper[4766]: E0129 11:21:50.224771 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.224563 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.243220 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.259346 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.273965 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.274007 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.274020 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.274039 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.274052 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:50Z","lastTransitionTime":"2026-01-29T11:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.377151 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.377197 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.377207 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.377223 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.377234 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:50Z","lastTransitionTime":"2026-01-29T11:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.480608 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.480668 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.480678 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.480704 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.480715 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:50Z","lastTransitionTime":"2026-01-29T11:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.584318 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.584391 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.584402 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.584443 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.584457 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:50Z","lastTransitionTime":"2026-01-29T11:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.686751 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.686814 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.686828 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.686854 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.686901 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:50Z","lastTransitionTime":"2026-01-29T11:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.789349 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.789401 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.789432 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.789455 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.789469 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:50Z","lastTransitionTime":"2026-01-29T11:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.830592 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" event={"ID":"98622e63-ce1a-413d-8a0a-32610d52ab94","Type":"ContainerStarted","Data":"c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5"} Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.830662 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" event={"ID":"98622e63-ce1a-413d-8a0a-32610d52ab94","Type":"ContainerStarted","Data":"815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d"} Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.830681 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" event={"ID":"98622e63-ce1a-413d-8a0a-32610d52ab94","Type":"ContainerStarted","Data":"1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b"} Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.832884 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" event={"ID":"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc","Type":"ContainerStarted","Data":"1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b"} Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.854677 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"m
ultus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.870260 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.889855 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\
\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\
\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.892404 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.892482 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.892498 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.892518 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.892532 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:50Z","lastTransitionTime":"2026-01-29T11:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.916213 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c2
97831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.939568 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.955917 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.973337 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 05:49:28.888873399 +0000 UTC Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.978648 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.992038 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.996062 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.996116 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.996131 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.996156 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:50 crc kubenswrapper[4766]: I0129 11:21:50.996172 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:50Z","lastTransitionTime":"2026-01-29T11:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.007854 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:51Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.025005 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:51Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.042960 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:51Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.062476 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:51Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.099742 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.099966 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.100045 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.100164 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.100257 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:51Z","lastTransitionTime":"2026-01-29T11:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.205294 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.205339 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.205350 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.205374 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.205385 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:51Z","lastTransitionTime":"2026-01-29T11:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.223503 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:21:51 crc kubenswrapper[4766]: E0129 11:21:51.223736 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.308127 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.308174 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.308186 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.308207 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.308220 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:51Z","lastTransitionTime":"2026-01-29T11:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.411273 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.411634 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.411728 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.411817 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.411956 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:51Z","lastTransitionTime":"2026-01-29T11:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.412971 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm"] Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.413535 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.415463 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.415854 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.438687 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:51Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.453797 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:51Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.475514 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:51Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.499775 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:51Z 
is after 2025-08-24T17:21:41Z" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.514896 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:51Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.515014 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.515050 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.515060 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.515080 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.515091 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:51Z","lastTransitionTime":"2026-01-29T11:21:51Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.531177 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:51Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.547645 
4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:51Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.551319 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8p4c\" (UniqueName: \"kubernetes.io/projected/b907fc44-f3fb-43b4-86e2-60d1379c3b26-kube-api-access-d8p4c\") pod \"ovnkube-control-plane-749d76644c-dc6zm\" (UID: \"b907fc44-f3fb-43b4-86e2-60d1379c3b26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.551402 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b907fc44-f3fb-43b4-86e2-60d1379c3b26-env-overrides\") pod \"ovnkube-control-plane-749d76644c-dc6zm\" (UID: \"b907fc44-f3fb-43b4-86e2-60d1379c3b26\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.551493 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b907fc44-f3fb-43b4-86e2-60d1379c3b26-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-dc6zm\" (UID: \"b907fc44-f3fb-43b4-86e2-60d1379c3b26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.551526 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b907fc44-f3fb-43b4-86e2-60d1379c3b26-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-dc6zm\" (UID: \"b907fc44-f3fb-43b4-86e2-60d1379c3b26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.571626 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:51Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.589166 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:51Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.605588 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:51Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.623599 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.623649 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.623662 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.623680 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.623693 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:51Z","lastTransitionTime":"2026-01-29T11:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.628180 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:51Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.643572 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:51Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.652725 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8p4c\" (UniqueName: \"kubernetes.io/projected/b907fc44-f3fb-43b4-86e2-60d1379c3b26-kube-api-access-d8p4c\") pod \"ovnkube-control-plane-749d76644c-dc6zm\" (UID: \"b907fc44-f3fb-43b4-86e2-60d1379c3b26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.652784 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b907fc44-f3fb-43b4-86e2-60d1379c3b26-env-overrides\") pod \"ovnkube-control-plane-749d76644c-dc6zm\" (UID: \"b907fc44-f3fb-43b4-86e2-60d1379c3b26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.652834 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b907fc44-f3fb-43b4-86e2-60d1379c3b26-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-dc6zm\" (UID: \"b907fc44-f3fb-43b4-86e2-60d1379c3b26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.652854 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b907fc44-f3fb-43b4-86e2-60d1379c3b26-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-dc6zm\" (UID: \"b907fc44-f3fb-43b4-86e2-60d1379c3b26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.653721 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b907fc44-f3fb-43b4-86e2-60d1379c3b26-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-dc6zm\" (UID: \"b907fc44-f3fb-43b4-86e2-60d1379c3b26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.654014 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b907fc44-f3fb-43b4-86e2-60d1379c3b26-env-overrides\") pod \"ovnkube-control-plane-749d76644c-dc6zm\" (UID: \"b907fc44-f3fb-43b4-86e2-60d1379c3b26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.660881 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:51Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.661878 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b907fc44-f3fb-43b4-86e2-60d1379c3b26-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-dc6zm\" (UID: \"b907fc44-f3fb-43b4-86e2-60d1379c3b26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.674879 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8p4c\" (UniqueName: \"kubernetes.io/projected/b907fc44-f3fb-43b4-86e2-60d1379c3b26-kube-api-access-d8p4c\") pod \"ovnkube-control-plane-749d76644c-dc6zm\" (UID: \"b907fc44-f3fb-43b4-86e2-60d1379c3b26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.727847 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.727894 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.727922 
4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.727948 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.727963 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:51Z","lastTransitionTime":"2026-01-29T11:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.750474 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" Jan 29 11:21:51 crc kubenswrapper[4766]: W0129 11:21:51.773265 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb907fc44_f3fb_43b4_86e2_60d1379c3b26.slice/crio-997119e47ef634ff020dc3e8339e175fb608ab78b361344a4189904bb9e4ae14 WatchSource:0}: Error finding container 997119e47ef634ff020dc3e8339e175fb608ab78b361344a4189904bb9e4ae14: Status 404 returned error can't find the container with id 997119e47ef634ff020dc3e8339e175fb608ab78b361344a4189904bb9e4ae14 Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.831924 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.831977 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.831996 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.832021 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.832038 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:51Z","lastTransitionTime":"2026-01-29T11:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.836605 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" event={"ID":"b907fc44-f3fb-43b4-86e2-60d1379c3b26","Type":"ContainerStarted","Data":"997119e47ef634ff020dc3e8339e175fb608ab78b361344a4189904bb9e4ae14"} Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.934949 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.935003 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.935016 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.935037 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.935051 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:51Z","lastTransitionTime":"2026-01-29T11:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:51 crc kubenswrapper[4766]: I0129 11:21:51.974116 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 05:01:15.548824971 +0000 UTC Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.037817 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.037866 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.037879 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.037899 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.037912 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:52Z","lastTransitionTime":"2026-01-29T11:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.140863 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.140920 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.140932 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.140951 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.140963 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:52Z","lastTransitionTime":"2026-01-29T11:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.224295 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:21:52 crc kubenswrapper[4766]: E0129 11:21:52.224495 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.224984 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:21:52 crc kubenswrapper[4766]: E0129 11:21:52.225044 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.251066 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.251115 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.251127 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.251147 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.251160 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:52Z","lastTransitionTime":"2026-01-29T11:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.354616 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.354671 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.354683 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.354702 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.354715 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:52Z","lastTransitionTime":"2026-01-29T11:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.457994 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.458089 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.458105 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.458131 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.458151 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:52Z","lastTransitionTime":"2026-01-29T11:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.561375 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.561454 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.561465 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.561484 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.561496 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:52Z","lastTransitionTime":"2026-01-29T11:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.664074 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.664137 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.664149 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.664172 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.664185 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:52Z","lastTransitionTime":"2026-01-29T11:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.767760 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.767806 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.767817 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.767835 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.767847 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:52Z","lastTransitionTime":"2026-01-29T11:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.842364 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" event={"ID":"b907fc44-f3fb-43b4-86e2-60d1379c3b26","Type":"ContainerStarted","Data":"17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f"} Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.842444 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" event={"ID":"b907fc44-f3fb-43b4-86e2-60d1379c3b26","Type":"ContainerStarted","Data":"8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7"} Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.844851 4766 generic.go:334] "Generic (PLEG): container finished" podID="b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc" containerID="1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b" exitCode=0 Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.844886 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" event={"ID":"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc","Type":"ContainerDied","Data":"1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b"} Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.860651 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:52Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.870453 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.870498 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.870514 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.870538 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.870553 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:52Z","lastTransitionTime":"2026-01-29T11:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.876035 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:52Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.890629 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:52Z is after 2025-08-24T17:21:41Z" Jan 29 
11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.909668 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:52Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.927343 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:52Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.949504 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77
3257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev
/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\
\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:52Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.963984 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:52Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.973203 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.973251 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.973263 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.973287 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.973299 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:52Z","lastTransitionTime":"2026-01-29T11:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.975472 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 21:57:01.561273839 +0000 UTC Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.980990 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:52Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:52 crc kubenswrapper[4766]: I0129 11:21:52.996360 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:52Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.014080 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.030095 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.048572 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 
crc kubenswrapper[4766]: I0129 11:21:53.064692 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.076943 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.076979 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.076987 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.077003 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.077013 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:53Z","lastTransitionTime":"2026-01-29T11:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.080243 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.092976 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.106606 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.120400 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.134621 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.150060 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.166583 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: 
I0129 11:21:53.180695 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.180705 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.180749 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.180931 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.180956 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.180973 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:53Z","lastTransitionTime":"2026-01-29T11:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.192183 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.206082 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.217750 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.224548 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:21:53 crc kubenswrapper[4766]: E0129 11:21:53.224748 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.232625 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.252129 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z 
is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.283567 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.283609 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.283619 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.283637 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.283649 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:53Z","lastTransitionTime":"2026-01-29T11:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.351771 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-xrjg5"] Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.352256 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:21:53 crc kubenswrapper[4766]: E0129 11:21:53.352324 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.365770 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.382896 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.386900 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.386974 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.386992 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.387023 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.387040 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:53Z","lastTransitionTime":"2026-01-29T11:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.400525 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.416327 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.431839 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.447151 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.461009 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.473536 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z is after 2025-08-24T17:21:41Z" Jan 29 
11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.476751 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mcwz\" (UniqueName: \"kubernetes.io/projected/3910984a-a754-462f-9414-183a50bb78b8-kube-api-access-2mcwz\") pod \"network-metrics-daemon-xrjg5\" (UID: \"3910984a-a754-462f-9414-183a50bb78b8\") " pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.476806 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3910984a-a754-462f-9414-183a50bb78b8-metrics-certs\") pod \"network-metrics-daemon-xrjg5\" (UID: \"3910984a-a754-462f-9414-183a50bb78b8\") " pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.488008 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.489976 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.490017 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:53 crc kubenswrapper[4766]: 
I0129 11:21:53.490028 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.490049 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.490064 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:53Z","lastTransitionTime":"2026-01-29T11:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.502238 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\
\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.521467 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z 
is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.537956 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.552083 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.567982 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:53Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.578529 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.578685 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mcwz\" (UniqueName: \"kubernetes.io/projected/3910984a-a754-462f-9414-183a50bb78b8-kube-api-access-2mcwz\") pod \"network-metrics-daemon-xrjg5\" (UID: \"3910984a-a754-462f-9414-183a50bb78b8\") " pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:21:53 crc kubenswrapper[4766]: E0129 11:21:53.578706 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:22:01.578675389 +0000 UTC m=+58.691068400 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.578740 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.578774 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3910984a-a754-462f-9414-183a50bb78b8-metrics-certs\") pod \"network-metrics-daemon-xrjg5\" (UID: \"3910984a-a754-462f-9414-183a50bb78b8\") " pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.578802 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.578830 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.578859 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:21:53 crc kubenswrapper[4766]: E0129 11:21:53.578951 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 11:21:53 crc kubenswrapper[4766]: E0129 11:21:53.578981 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 11:21:53 crc kubenswrapper[4766]: E0129 11:21:53.578988 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 11:21:53 crc kubenswrapper[4766]: E0129 11:21:53.579009 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 11:21:53 crc kubenswrapper[4766]: 
E0129 11:21:53.579004 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 11:21:53 crc kubenswrapper[4766]: E0129 11:21:53.579027 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3910984a-a754-462f-9414-183a50bb78b8-metrics-certs podName:3910984a-a754-462f-9414-183a50bb78b8 nodeName:}" failed. No retries permitted until 2026-01-29 11:21:54.079010878 +0000 UTC m=+51.191403889 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3910984a-a754-462f-9414-183a50bb78b8-metrics-certs") pod "network-metrics-daemon-xrjg5" (UID: "3910984a-a754-462f-9414-183a50bb78b8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 11:21:53 crc kubenswrapper[4766]: E0129 11:21:53.579025 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:21:53 crc kubenswrapper[4766]: E0129 11:21:53.579109 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 11:21:53 crc kubenswrapper[4766]: E0129 11:21:53.579148 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 11:21:53 crc kubenswrapper[4766]: E0129 11:21:53.579131 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 11:22:01.579108551 +0000 UTC m=+58.691501562 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 11:21:53 crc kubenswrapper[4766]: E0129 11:21:53.579166 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:21:53 crc kubenswrapper[4766]: E0129 11:21:53.579181 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 11:22:01.579169223 +0000 UTC m=+58.691562234 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 11:21:53 crc kubenswrapper[4766]: E0129 11:21:53.579200 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 11:22:01.579191943 +0000 UTC m=+58.691584944 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:21:53 crc kubenswrapper[4766]: E0129 11:21:53.579245 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 11:22:01.579219734 +0000 UTC m=+58.691612745 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.594953 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.595010 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.595025 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.595070 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.595088 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:53Z","lastTransitionTime":"2026-01-29T11:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.606488 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mcwz\" (UniqueName: \"kubernetes.io/projected/3910984a-a754-462f-9414-183a50bb78b8-kube-api-access-2mcwz\") pod \"network-metrics-daemon-xrjg5\" (UID: \"3910984a-a754-462f-9414-183a50bb78b8\") " pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.698817 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.698877 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.698889 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.698908 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.698920 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:53Z","lastTransitionTime":"2026-01-29T11:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.801188 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.801254 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.801267 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.801287 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.801300 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:53Z","lastTransitionTime":"2026-01-29T11:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.851696 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" event={"ID":"98622e63-ce1a-413d-8a0a-32610d52ab94","Type":"ContainerStarted","Data":"402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26"} Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.853943 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" event={"ID":"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc","Type":"ContainerStarted","Data":"c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49"} Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.904548 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.904594 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.904605 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.904623 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.904635 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:53Z","lastTransitionTime":"2026-01-29T11:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:53 crc kubenswrapper[4766]: I0129 11:21:53.976716 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 09:23:55.648846967 +0000 UTC Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.007462 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.007531 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.007546 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.007571 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.007592 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:54Z","lastTransitionTime":"2026-01-29T11:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.086668 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3910984a-a754-462f-9414-183a50bb78b8-metrics-certs\") pod \"network-metrics-daemon-xrjg5\" (UID: \"3910984a-a754-462f-9414-183a50bb78b8\") " pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:21:54 crc kubenswrapper[4766]: E0129 11:21:54.086918 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 11:21:54 crc kubenswrapper[4766]: E0129 11:21:54.087044 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3910984a-a754-462f-9414-183a50bb78b8-metrics-certs podName:3910984a-a754-462f-9414-183a50bb78b8 nodeName:}" failed. No retries permitted until 2026-01-29 11:21:55.087012642 +0000 UTC m=+52.199405813 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3910984a-a754-462f-9414-183a50bb78b8-metrics-certs") pod "network-metrics-daemon-xrjg5" (UID: "3910984a-a754-462f-9414-183a50bb78b8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.110603 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.110662 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.110674 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.110699 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.110716 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:54Z","lastTransitionTime":"2026-01-29T11:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.213427 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.213483 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.213502 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.213521 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.213534 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:54Z","lastTransitionTime":"2026-01-29T11:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.224288 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.224394 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:21:54 crc kubenswrapper[4766]: E0129 11:21:54.224711 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:21:54 crc kubenswrapper[4766]: E0129 11:21:54.224848 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.316769 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.316830 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.316840 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.316855 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.316866 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:54Z","lastTransitionTime":"2026-01-29T11:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.419511 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.419571 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.419590 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.419612 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.419626 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:54Z","lastTransitionTime":"2026-01-29T11:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.522314 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.522362 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.522374 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.522389 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.522399 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:54Z","lastTransitionTime":"2026-01-29T11:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.625836 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.625907 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.625923 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.625949 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.625965 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:54Z","lastTransitionTime":"2026-01-29T11:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.728800 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.728836 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.728847 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.728867 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.728885 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:54Z","lastTransitionTime":"2026-01-29T11:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.832048 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.832109 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.832122 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.832142 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.832154 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:54Z","lastTransitionTime":"2026-01-29T11:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.864052 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" event={"ID":"98622e63-ce1a-413d-8a0a-32610d52ab94","Type":"ContainerStarted","Data":"48102e118ceddce358d9b6fcc9900a365130c5f1c75a08b393b337b6acd7e495"} Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.864522 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.864670 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.864702 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.866736 4766 generic.go:334] "Generic (PLEG): container finished" podID="b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc" containerID="c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49" exitCode=0 Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.866824 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" event={"ID":"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc","Type":"ContainerDied","Data":"c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49"} Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.884724 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:54Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.907627 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.910402 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:54Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.918638 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.926665 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:54Z is after 2025-08-24T17:21:41Z" Jan 29 
11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.935172 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.935277 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.935286 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.935306 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.935317 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:54Z","lastTransitionTime":"2026-01-29T11:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.946881 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:54Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.966567 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:54Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.977259 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 07:56:46.84022581 +0000 UTC Jan 29 11:21:54 crc kubenswrapper[4766]: I0129 11:21:54.991316 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with incomplete 
status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:54Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.030990 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48102e118ceddce358d9b6fcc9900a365130c5f1
c75a08b393b337b6acd7e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.038467 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.038528 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.038539 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.038558 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.038568 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:55Z","lastTransitionTime":"2026-01-29T11:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.055590 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.075659 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.101442 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.102927 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3910984a-a754-462f-9414-183a50bb78b8-metrics-certs\") pod \"network-metrics-daemon-xrjg5\" (UID: \"3910984a-a754-462f-9414-183a50bb78b8\") " pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:21:55 crc kubenswrapper[4766]: E0129 11:21:55.103142 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 11:21:55 crc kubenswrapper[4766]: E0129 11:21:55.103256 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3910984a-a754-462f-9414-183a50bb78b8-metrics-certs podName:3910984a-a754-462f-9414-183a50bb78b8 nodeName:}" failed. No retries permitted until 2026-01-29 11:21:57.103224895 +0000 UTC m=+54.215618096 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3910984a-a754-462f-9414-183a50bb78b8-metrics-certs") pod "network-metrics-daemon-xrjg5" (UID: "3910984a-a754-462f-9414-183a50bb78b8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.126909 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.141893 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.141944 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.141958 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.141981 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.141994 4766 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:55Z","lastTransitionTime":"2026-01-29T11:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.153526 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.179342 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.193595 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.220837 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.223974 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.224005 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:21:55 crc kubenswrapper[4766]: E0129 11:21:55.224118 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:21:55 crc kubenswrapper[4766]: E0129 11:21:55.224374 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.245256 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.245305 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.245318 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.245341 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.245355 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:55Z","lastTransitionTime":"2026-01-29T11:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.246121 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.270385 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 
11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.290899 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.319819 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.348325 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.348386 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.348399 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.348439 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.348454 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:55Z","lastTransitionTime":"2026-01-29T11:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.353693 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48102e118ceddce358d9b6fcc9900a365130c5f1c75a08b393b337b6acd7e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.373786 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.396738 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.420864 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.441718 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.451007 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.451075 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.451092 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.451113 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.451126 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:55Z","lastTransitionTime":"2026-01-29T11:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.461337 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.480977 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.494849 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.511356 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.539727 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48102e118ceddce358d9b6fcc9900a365130c5f1c75a08b393b337b6acd7e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.554672 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.554678 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.554730 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.554893 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.554919 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.554932 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:55Z","lastTransitionTime":"2026-01-29T11:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.573805 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.592460 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.607586 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.621304 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.621361 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.621378 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.621400 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.621442 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:55Z","lastTransitionTime":"2026-01-29T11:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.622645 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: E0129 11:21:55.636608 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e
1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.642205 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.645190 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.645221 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.645234 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.645252 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.645264 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:55Z","lastTransitionTime":"2026-01-29T11:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.666089 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: E0129 11:21:55.666489 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.676870 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.676924 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.676937 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.676958 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.676972 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:55Z","lastTransitionTime":"2026-01-29T11:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.683862 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: E0129 11:21:55.693116 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient 
memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\
\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\
":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.699048 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.699119 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.699133 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.699154 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.699172 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:55Z","lastTransitionTime":"2026-01-29T11:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.702588 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.718328 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: E0129 11:21:55.718591 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"]
,\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUU
ID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.723869 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.723937 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.723950 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.723969 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.723980 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:55Z","lastTransitionTime":"2026-01-29T11:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.736079 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 
11:21:55 crc kubenswrapper[4766]: E0129 11:21:55.745177 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: E0129 11:21:55.745333 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.747171 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.747217 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.747231 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.747253 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.747269 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:55Z","lastTransitionTime":"2026-01-29T11:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.757761 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.774988 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.850114 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.850163 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.850172 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.850204 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.850218 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:55Z","lastTransitionTime":"2026-01-29T11:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.873182 4766 generic.go:334] "Generic (PLEG): container finished" podID="b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc" containerID="c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439" exitCode=0 Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.873491 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" event={"ID":"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc","Type":"ContainerDied","Data":"c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439"} Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.889846 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.904350 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.917324 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 
11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.932253 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.946486 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.953125 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.953173 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.953182 4766 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.953198 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.953213 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:55Z","lastTransitionTime":"2026-01-29T11:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.969601 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.977735 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 16:22:01.241432276 +0000 UTC Jan 29 11:21:55 crc kubenswrapper[4766]: I0129 11:21:55.991640 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48102e118ceddce358d9b6fcc9900a365130c5f1
c75a08b393b337b6acd7e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.007010 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:56Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.021655 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:56Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.040261 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:56Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.057122 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.057163 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.057174 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.057221 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.057234 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:56Z","lastTransitionTime":"2026-01-29T11:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.058782 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:56Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.075638 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:56Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.096279 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:56Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.114356 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:56Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.160138 4766 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.160199 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.160211 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.160233 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.160245 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:56Z","lastTransitionTime":"2026-01-29T11:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.224339 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.224448 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:21:56 crc kubenswrapper[4766]: E0129 11:21:56.224494 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:21:56 crc kubenswrapper[4766]: E0129 11:21:56.224595 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.263965 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.264012 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.264024 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.264041 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.264054 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:56Z","lastTransitionTime":"2026-01-29T11:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.279661 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.295455 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.1
26.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:56Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.295815 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.314251 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:56Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.337875 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48102e118ceddce358d9b6fcc9900a365130c5f1
c75a08b393b337b6acd7e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:56Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.353862 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:56Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.366460 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.366512 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.366523 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.366542 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.366555 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:56Z","lastTransitionTime":"2026-01-29T11:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.371019 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:56Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.389809 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:56Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.415169 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:56Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.437163 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:56Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.452578 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:56Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.465707 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:56Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.469888 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.469920 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.469928 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.469943 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.469953 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:56Z","lastTransitionTime":"2026-01-29T11:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.487970 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:56Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.505819 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:56Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.519096 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:56Z is after 2025-08-24T17:21:41Z" Jan 29 
11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.535736 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:56Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.573429 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.573483 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.573496 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.573515 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.573528 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:56Z","lastTransitionTime":"2026-01-29T11:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.676579 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.676627 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.676656 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.676674 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.676688 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:56Z","lastTransitionTime":"2026-01-29T11:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.778867 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.778899 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.778908 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.778923 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.778933 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:56Z","lastTransitionTime":"2026-01-29T11:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.880684 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.880719 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.880728 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.880746 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.880753 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" event={"ID":"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc","Type":"ContainerStarted","Data":"a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e"} Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.880756 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:56Z","lastTransitionTime":"2026-01-29T11:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.898335 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:56Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.914886 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:56Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.931206 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:56Z is after 2025-08-24T17:21:41Z" Jan 29 
11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.944890 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:56Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.962581 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:56Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.980507 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 10:21:32.218598295 +0000 UTC Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.983850 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.983931 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.983783 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48102e118ceddce358d9b6fcc9900a365130c5f1
c75a08b393b337b6acd7e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:56Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.983946 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.984086 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:56 crc kubenswrapper[4766]: I0129 11:21:56.984103 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:56Z","lastTransitionTime":"2026-01-29T11:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.002882 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:56Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.020496 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:57Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.048464 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a45c5025-5014-4cda-b09c-b8fe58daa0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c3e4b23de55df1e7416d9834c594e6b8baa72850428481ae9589ac2e3a2848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6af6b65be19d42cb0398dd814bea1497dd7a258533b34d84a55aafe3997a422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://368e7d2846989301de5391a33bce19ec278b8a597dad4b565340a9102cb0ca8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:57Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.067007 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:57Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.081908 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:57Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.086935 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.086978 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.086988 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.087007 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.087017 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:57Z","lastTransitionTime":"2026-01-29T11:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.096736 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:57Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.113280 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:57Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.125105 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3910984a-a754-462f-9414-183a50bb78b8-metrics-certs\") pod \"network-metrics-daemon-xrjg5\" (UID: \"3910984a-a754-462f-9414-183a50bb78b8\") " pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:21:57 crc kubenswrapper[4766]: E0129 11:21:57.125269 4766 secret.go:188] 
Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 11:21:57 crc kubenswrapper[4766]: E0129 11:21:57.125343 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3910984a-a754-462f-9414-183a50bb78b8-metrics-certs podName:3910984a-a754-462f-9414-183a50bb78b8 nodeName:}" failed. No retries permitted until 2026-01-29 11:22:01.125321764 +0000 UTC m=+58.237714775 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3910984a-a754-462f-9414-183a50bb78b8-metrics-certs") pod "network-metrics-daemon-xrjg5" (UID: "3910984a-a754-462f-9414-183a50bb78b8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.127005 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:57Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:57 crc 
kubenswrapper[4766]: I0129 11:21:57.147309 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-29T11:21:57Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.189589 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.189634 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.189643 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.189659 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.189671 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:57Z","lastTransitionTime":"2026-01-29T11:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.224123 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.224357 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:21:57 crc kubenswrapper[4766]: E0129 11:21:57.224477 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:21:57 crc kubenswrapper[4766]: E0129 11:21:57.224585 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.244508 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.244660 4766 scope.go:117] "RemoveContainer" containerID="0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.292247 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.292282 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.292292 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.292311 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.292323 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:57Z","lastTransitionTime":"2026-01-29T11:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.395391 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.395441 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.395451 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.395467 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.395478 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:57Z","lastTransitionTime":"2026-01-29T11:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.498083 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.498118 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.498126 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.498142 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.498152 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:57Z","lastTransitionTime":"2026-01-29T11:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.600990 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.601043 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.601054 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.601072 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.601083 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:57Z","lastTransitionTime":"2026-01-29T11:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.705781 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.705820 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.705830 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.705848 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.705859 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:57Z","lastTransitionTime":"2026-01-29T11:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.813201 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.813889 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.813907 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.813925 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.813936 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:57Z","lastTransitionTime":"2026-01-29T11:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.885972 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.889738 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"81d6b9ab2c5f75cb3a1a6580174135bdbe87b1e341de30ae151d2c7916fb6e85"} Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.917044 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.917105 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.917114 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.917136 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.917146 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:57Z","lastTransitionTime":"2026-01-29T11:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.951945 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:57Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.980847 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 08:50:57.916347797 +0000 UTC Jan 29 11:21:57 crc kubenswrapper[4766]: I0129 11:21:57.981853 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:57Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.020144 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.020215 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.020227 4766 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.020250 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.020264 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:58Z","lastTransitionTime":"2026-01-29T11:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.026825 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:58Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.048869 4766 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5dc50cb-2d41-45cd-8a3d-615212a20120\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c126f1878b27bb8648cebba2334b545a61682575e486c7752447760c630b71f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a4c1de706188e9d9c986cf611fcfa0afc2fa6d0d9e45908d9864fbd096fb7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a1895436e31a3a277d7ef40231e37f768d143472a5d055ec3fa3908d59eb806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d6b9ab2c5f75cb3a1a6580174135bdbe87b1e341de30ae151d2c7916fb6e85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T11:21:38Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 11:21:38.187211 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 11:21:38.187475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 11:21:38.188924 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-858855041/tls.crt::/tmp/serving-cert-858855041/tls.key\\\\\\\"\\\\nI0129 11:21:38.443648 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 11:21:38.447463 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 11:21:38.447603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 11:21:38.447664 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 11:21:38.447692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 11:21:38.471406 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 11:21:38.471454 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 11:21:38.471483 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 11:21:38.471487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 11:21:38.471491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 11:21:38.471436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 11:21:38.475175 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964049484efc670285ee54e4f6081c1f719edaa8143966e9762028ad97d2518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:58Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.070403 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a45c5025-5014-4cda-b09c-b8fe58daa0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c3e4b23de55df1e7416d9834c594e6b8baa72850428481ae9589ac2e3a2848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6af6b65be19d42cb0398dd814bea1497dd7a258533b34d84a55aafe3997a422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://368e7d2846989301de5391a33bce19ec278b8a597dad4b565340a9102cb0ca8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:58Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.088911 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:58Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.111604 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:58Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.122458 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.122503 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.122514 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.122530 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.122541 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:58Z","lastTransitionTime":"2026-01-29T11:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.129682 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:58Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.145301 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:58Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.157928 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:58Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.180656 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:58Z is after 2025-08-24T17:21:41Z" Jan 29 
11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.199902 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:58Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.215198 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:58Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.223376 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.223493 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 11:21:58 crc kubenswrapper[4766]: E0129 11:21:58.223520 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 11:21:58 crc kubenswrapper[4766]: E0129 11:21:58.223651 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.224806 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.224854 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.224870 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.224887 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.224897 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:58Z","lastTransitionTime":"2026-01-29T11:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.235520 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:
21:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:58Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.255186 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48102e118ceddce358d9b6fcc9900a365130c5f1c75a08b393b337b6acd7e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:58Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.273264 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:58Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.327715 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.327754 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.327763 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.327780 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.327790 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:58Z","lastTransitionTime":"2026-01-29T11:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.430149 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.430190 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.430200 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.430217 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.430227 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:58Z","lastTransitionTime":"2026-01-29T11:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.533087 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.533140 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.533151 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.533168 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.533179 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:58Z","lastTransitionTime":"2026-01-29T11:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.637060 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.637115 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.637127 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.637147 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.637162 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:58Z","lastTransitionTime":"2026-01-29T11:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.739581 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.739634 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.739649 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.739671 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.739685 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:58Z","lastTransitionTime":"2026-01-29T11:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.841761 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.841794 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.841803 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.841820 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.841829 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:58Z","lastTransitionTime":"2026-01-29T11:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.897910 4766 generic.go:334] "Generic (PLEG): container finished" podID="b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc" containerID="a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e" exitCode=0 Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.898006 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" event={"ID":"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc","Type":"ContainerDied","Data":"a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e"} Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.898294 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.916961 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:58Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.937963 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:58Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.943725 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:58 crc 
kubenswrapper[4766]: I0129 11:21:58.943767 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.943782 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.943802 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.943817 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:58Z","lastTransitionTime":"2026-01-29T11:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.954796 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:58Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.977628 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:58Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.981637 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 04:10:12.644565914 +0000 UTC Jan 29 11:21:58 crc kubenswrapper[4766]: I0129 11:21:58.989708 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:58Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.007901 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:59Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.027766 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:59Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.048453 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.048504 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.048519 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.048539 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.048551 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:59Z","lastTransitionTime":"2026-01-29T11:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.049723 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/
\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48102e118ceddce358d9b6fcc9900a365130c5f1c75a08b393b337b6acd7e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswi
tch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-01-29T11:21:59Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.065293 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:59Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.081867 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:59Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.099238 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:59Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.118178 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:59Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.132296 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:59Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:59 
crc kubenswrapper[4766]: I0129 11:21:59.151678 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5dc50cb-2d41-45cd-8a3d-615212a20120\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c126f1878b27bb8648cebba2334b545a61682575e486c7752447760c630b71f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a4c1de706188e9d9c986cf611fcfa0afc2fa6d0d9e45908d9864fbd096fb7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a1895436e31a3a277d7ef40231e37f768d143472a5d055ec3fa3908d59eb806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d6b9ab2c5f75cb3a1a6580174135bdbe87b1e341de30ae151d2c7916fb6e85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T11:21:38Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 11:21:38.187211 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 11:21:38.187475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 11:21:38.188924 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-858855041/tls.crt::/tmp/serving-cert-858855041/tls.key\\\\\\\"\\\\nI0129 11:21:38.443648 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 11:21:38.447463 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 11:21:38.447603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 11:21:38.447664 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 11:21:38.447692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 11:21:38.471406 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 11:21:38.471454 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 11:21:38.471483 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 11:21:38.471487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 11:21:38.471491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 11:21:38.471436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 11:21:38.475175 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964049484efc670285ee54e4f6081c1f719edaa8143966e9762028ad97d2518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:59Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.152497 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.152588 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.152602 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.152617 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.152629 4766 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:59Z","lastTransitionTime":"2026-01-29T11:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.169534 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a45c5025-5014-4cda-b09c-b8fe58daa0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c3e4b23de55df1e7416d9834c594e6b8baa72850428481ae9589ac2e3a2848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6af6b65be19d42cb0398dd814bea1497dd7a258533b34d84a55aafe3997a422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://368e7d2846989301de5391a33bce19ec278b8a597dad4b565340a9102cb0ca8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:59Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.184237 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:59Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.223702 4766 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.223819 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:21:59 crc kubenswrapper[4766]: E0129 11:21:59.223854 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:21:59 crc kubenswrapper[4766]: E0129 11:21:59.224000 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.255533 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.255590 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.255603 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.255624 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.255637 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:59Z","lastTransitionTime":"2026-01-29T11:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.357993 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.358031 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.358041 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.358059 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.358069 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:59Z","lastTransitionTime":"2026-01-29T11:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.461068 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.461106 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.461116 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.461134 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.461144 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:59Z","lastTransitionTime":"2026-01-29T11:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.564640 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.564696 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.564709 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.564728 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.564740 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:59Z","lastTransitionTime":"2026-01-29T11:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.667246 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.667316 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.667329 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.667351 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.667363 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:59Z","lastTransitionTime":"2026-01-29T11:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.667246 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.667316 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.667329 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.667351 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.667363 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:59Z","lastTransitionTime":"2026-01-29T11:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.770085 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.770143 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.770156 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.770176 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.770186 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:59Z","lastTransitionTime":"2026-01-29T11:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
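The KubeletNotReady message points at an empty /etc/kubernetes/cni/net.d/. A hedged sketch of the effective check follows, assuming the runtime merely looks for a .conf/.conflist/.json file in that directory; the real lookup lives in the CNI library's config loading, and the extension list here is an assumption, not taken from kubelet source:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one CNI config file.
// Mimics the observable behavior behind "no CNI configuration file in ...".
func hasCNIConfig(dir string) bool {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false // an unreadable or missing dir counts as "no config yet"
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true
		}
	}
	return false
}

func main() {
	fmt.Println("NetworkReady:", hasCNIConfig("/etc/kubernetes/cni/net.d"))
}
```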
Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.872216 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.872269 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.872287 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.872305 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.872315 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:59Z","lastTransitionTime":"2026-01-29T11:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.902396 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zn4kn_98622e63-ce1a-413d-8a0a-32610d52ab94/ovnkube-controller/0.log"
Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.905153 4766 generic.go:334] "Generic (PLEG): container finished" podID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerID="48102e118ceddce358d9b6fcc9900a365130c5f1c75a08b393b337b6acd7e495" exitCode=1
Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.905265 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" event={"ID":"98622e63-ce1a-413d-8a0a-32610d52ab94","Type":"ContainerDied","Data":"48102e118ceddce358d9b6fcc9900a365130c5f1c75a08b393b337b6acd7e495"}
Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.906257 4766 scope.go:117] "RemoveContainer" containerID="48102e118ceddce358d9b6fcc9900a365130c5f1c75a08b393b337b6acd7e495"
Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.908515 4766 generic.go:334] "Generic (PLEG): container finished" podID="b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc" containerID="e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b" exitCode=0
Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.908625 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" event={"ID":"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc","Type":"ContainerDied","Data":"e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b"}
Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.922795 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:59Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.936780 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:59Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.955871 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:59Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.975281 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.975314 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.975324 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.975338 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.975550 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:21:59Z","lastTransitionTime":"2026-01-29T11:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.980203 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/
\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48102e118ceddce358d9b6fcc9900a365130c5f1c75a08b393b337b6acd7e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48102e118ceddce358d9b6fcc9900a365130c5f1c75a08b393b337b6acd7e495\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"message\\\":\\\".EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 11:21:59.339393 5979 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:21:59.339975 5979 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 11:21:59.340022 5979 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 11:21:59.340046 5979 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 11:21:59.340054 5979 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 11:21:59.340075 5979 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 11:21:59.340083 5979 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 11:21:59.340136 5979 factory.go:656] Stopping watch factory\\\\nI0129 11:21:59.340160 5979 ovnkube.go:599] Stopped ovnkube\\\\nI0129 11:21:59.340166 5979 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 11:21:59.340190 5979 handler.go:208] Removed *v1.EgressIP 
event handler 8\\\\nI0129 11:21:59.340194 5979 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 11:21:59.340207 5979 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 11:21:59.340216 5979 handler.go:208] Removed *v1.Pod\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:59Z is after 2025-08-24T17:21:41Z" Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.982182 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 21:17:30.547126279 +0000 UTC Jan 29 11:21:59 crc kubenswrapper[4766]: I0129 11:21:59.994173 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:21:59Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.011852 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:00Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.028335 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:00Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.044637 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:00Z is after 2025-08-24T17:21:41Z"
Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.061477 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:00Z is after 2025-08-24T17:21:41Z"
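Every status patch above fails the same way: the network-node-identity webhook's serving certificate expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-29. A standalone sketch of the validity-window check that crypto/x509 applies during verification follows; the certificate path is a placeholder, not taken from the log:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path: point this at the webhook's serving certificate.
	pemBytes, err := os.ReadFile("/path/to/webhook-serving.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		// Same condition the log reports as "certificate has expired or
		// is not yet valid: current time ... is after 2025-08-24T17:21:41Z".
		fmt.Printf("certificate has expired or is not yet valid: current time %s, valid from %s until %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
		return
	}
	fmt.Println("certificate is within its validity window")
}
```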
Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.078924 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.078972 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.078985 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.079005 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.079018 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:00Z","lastTransitionTime":"2026-01-29T11:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.081405 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5dc50cb-2d41-45cd-8a3d-615212a20120\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"message\\\":\\\"containers with unready status:
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c126f1878b27bb8648cebba2334b545a61682575e486c7752447760c630b71f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a4c1de706188e9d9c986cf611fcfa0afc2fa6d0d9e45908d9864fbd096fb7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a1895436e31a3a277d7ef40231e37f768d143472a5d055ec3fa3908d59eb806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d6b9ab2c5f75cb3a1a6580174135bdbe87b1e341de30ae151d2c7916fb6e85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T11:21:38Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 11:21:38.187211 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 11:21:38.187475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 11:21:38.188924 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-858855041/tls.crt::/tmp/serving-cert-858855041/tls.key\\\\\\\"\\\\nI0129 11:21:38.443648 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 11:21:38.447463 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 11:21:38.447603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 11:21:38.447664 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 11:21:38.447692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 11:21:38.471406 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 11:21:38.471454 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 11:21:38.471483 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 11:21:38.471487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 11:21:38.471491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 11:21:38.471436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 11:21:38.475175 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964049484efc670285ee54e4f6081c1f719edaa8143966e9762028ad97d2518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:00Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.096351 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a45c5025-5014-4cda-b09c-b8fe58daa0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c3e4b23de55df1e7416d9834c594e6b8baa72850428481ae9589ac2e3a2848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6af6b65be19d42cb0398dd814bea1497dd7a258533b34d84a55aafe3997a422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://368e7d2846989301de5391a33bce19ec278b8a597dad4b565340a9102cb0ca8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:00Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.112016 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:00Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.125607 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:00Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.141996 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:00Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.157197 4766 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:00Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.172781 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:00Z is after 2025-08-24T17:21:41Z" Jan 29 
11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.181312 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.181362 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.181371 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.181388 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.181398 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:00Z","lastTransitionTime":"2026-01-29T11:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.196780 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:00Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 
11:22:00.213737 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:00Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.224980 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:00 crc kubenswrapper[4766]: E0129 11:22:00.225147 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.225229 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:22:00 crc kubenswrapper[4766]: E0129 11:22:00.225338 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.230050 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:00Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.244945 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-
kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:00Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.259116 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:00Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.277013 4766 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:00Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.283755 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 
11:22:00.283791 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.283803 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.283818 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.283827 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:00Z","lastTransitionTime":"2026-01-29T11:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.299962 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48102e118ceddce358d9b6fcc9900a365130c5f1
c75a08b393b337b6acd7e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48102e118ceddce358d9b6fcc9900a365130c5f1c75a08b393b337b6acd7e495\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"message\\\":\\\".EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 11:21:59.339393 5979 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:21:59.339975 5979 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 11:21:59.340022 5979 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 11:21:59.340046 5979 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 11:21:59.340054 5979 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 11:21:59.340075 5979 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 11:21:59.340083 5979 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 11:21:59.340136 5979 factory.go:656] Stopping watch factory\\\\nI0129 11:21:59.340160 5979 ovnkube.go:599] Stopped ovnkube\\\\nI0129 11:21:59.340166 5979 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 11:21:59.340190 5979 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 11:21:59.340194 5979 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 11:21:59.340207 5979 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 11:21:59.340216 5979 handler.go:208] Removed 
*v1.Pod\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:00Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.314305 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:00Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.330890 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5dc50cb-2d41-45cd-8a3d-615212a20120\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c126f1878b27bb8648cebba2334b545a61682575e486c7752447760c630b71f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a4c1de706188e9d9c986cf611fcfa0afc2fa6d0d9e45908d9864fbd096fb7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a1895436e31a3a277d7ef40231e37f768d143472a5d055ec3fa3908d59eb806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d6b9ab2c5f75cb3a1a6580174135bdbe87b1e341de30ae151d2c7916fb6e85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T11:21:38Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 11:21:38.187211 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 11:21:38.187475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 11:21:38.188924 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-858855041/tls.crt::/tmp/serving-cert-858855041/tls.key\\\\\\\"\\\\nI0129 11:21:38.443648 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 11:21:38.447463 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 11:21:38.447603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 11:21:38.447664 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 11:21:38.447692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 11:21:38.471406 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 11:21:38.471454 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 11:21:38.471483 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 11:21:38.471487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 11:21:38.471491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 11:21:38.471436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 11:21:38.475175 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964049484efc670285ee54e4f6081c1f719edaa8143966e9762028ad97d2518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:00Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.345904 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a45c5025-5014-4cda-b09c-b8fe58daa0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c3e4b23de55df1e7416d9834c594e6b8baa72850428481ae9589ac2e3a2848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6af6b65be19d42cb0398dd814bea1497dd7a258533b34d84a55aafe3997a422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://368e7d2846989301de5391a33bce19ec278b8a597dad4b565340a9102cb0ca8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:00Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.366517 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:00Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.384558 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:00Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.386504 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.386553 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.386566 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.386586 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.386599 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:00Z","lastTransitionTime":"2026-01-29T11:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.400573 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:00Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.417627 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:00Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.431570 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:00Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.447174 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:00Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.488891 4766 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.488938 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.488948 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.488969 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.488979 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:00Z","lastTransitionTime":"2026-01-29T11:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.594084 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.594130 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.594143 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.594160 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.594172 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:00Z","lastTransitionTime":"2026-01-29T11:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.697034 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.697097 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.697108 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.697124 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.697134 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:00Z","lastTransitionTime":"2026-01-29T11:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.800269 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.800327 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.800344 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.800366 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.800381 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:00Z","lastTransitionTime":"2026-01-29T11:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.903356 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.903446 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.903458 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.903475 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.903488 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:00Z","lastTransitionTime":"2026-01-29T11:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.914660 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zn4kn_98622e63-ce1a-413d-8a0a-32610d52ab94/ovnkube-controller/0.log" Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.916911 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" event={"ID":"98622e63-ce1a-413d-8a0a-32610d52ab94","Type":"ContainerStarted","Data":"30312fd30ac74239e62cdf1a45e32c1a527e55d48553bd060cecfbc2595660b8"} Jan 29 11:22:00 crc kubenswrapper[4766]: I0129 11:22:00.983206 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 03:35:37.841427199 +0000 UTC Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.006528 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.006577 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.006590 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.006609 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.006621 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:01Z","lastTransitionTime":"2026-01-29T11:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.109266 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.109298 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.109306 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.109320 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.109332 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:01Z","lastTransitionTime":"2026-01-29T11:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.167006 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3910984a-a754-462f-9414-183a50bb78b8-metrics-certs\") pod \"network-metrics-daemon-xrjg5\" (UID: \"3910984a-a754-462f-9414-183a50bb78b8\") " pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:22:01 crc kubenswrapper[4766]: E0129 11:22:01.167202 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 11:22:01 crc kubenswrapper[4766]: E0129 11:22:01.167320 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3910984a-a754-462f-9414-183a50bb78b8-metrics-certs podName:3910984a-a754-462f-9414-183a50bb78b8 nodeName:}" failed. No retries permitted until 2026-01-29 11:22:09.167288211 +0000 UTC m=+66.279681362 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3910984a-a754-462f-9414-183a50bb78b8-metrics-certs") pod "network-metrics-daemon-xrjg5" (UID: "3910984a-a754-462f-9414-183a50bb78b8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.211960 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.212013 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.212030 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.212054 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.212071 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:01Z","lastTransitionTime":"2026-01-29T11:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.224260 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:22:01 crc kubenswrapper[4766]: E0129 11:22:01.224387 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.224520 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:22:01 crc kubenswrapper[4766]: E0129 11:22:01.224597 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.314525 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.314564 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.314574 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.314590 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.314602 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:01Z","lastTransitionTime":"2026-01-29T11:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.418080 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.418127 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.418140 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.418157 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.418169 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:01Z","lastTransitionTime":"2026-01-29T11:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.520254 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.520300 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.520314 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.520332 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.520343 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:01Z","lastTransitionTime":"2026-01-29T11:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.623360 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.623441 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.623454 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.623474 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.623485 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:01Z","lastTransitionTime":"2026-01-29T11:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.677960 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.678125 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.678166 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.678189 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.678210 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:22:01 crc kubenswrapper[4766]: E0129 11:22:01.678277 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:22:17.678242277 +0000 UTC m=+74.790635298 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:22:01 crc kubenswrapper[4766]: E0129 11:22:01.678332 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 11:22:01 crc kubenswrapper[4766]: E0129 11:22:01.678372 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 11:22:01 crc kubenswrapper[4766]: E0129 11:22:01.678403 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 11:22:01 crc kubenswrapper[4766]: E0129 11:22:01.678456 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:22:01 crc kubenswrapper[4766]: E0129 11:22:01.678441 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 11:22:17.678427412 +0000 UTC m=+74.790820423 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 11:22:01 crc kubenswrapper[4766]: E0129 11:22:01.678513 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 11:22:01 crc kubenswrapper[4766]: E0129 11:22:01.678584 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 11:22:01 crc kubenswrapper[4766]: E0129 11:22:01.678601 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:22:01 crc kubenswrapper[4766]: E0129 11:22:01.678536 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 11:22:17.678513404 +0000 UTC m=+74.790906595 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:22:01 crc kubenswrapper[4766]: E0129 11:22:01.678691 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 11:22:17.678678459 +0000 UTC m=+74.791071470 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:22:01 crc kubenswrapper[4766]: E0129 11:22:01.678728 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 11:22:01 crc kubenswrapper[4766]: E0129 11:22:01.678791 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 11:22:17.678764231 +0000 UTC m=+74.791157262 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.727168 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.727223 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.727234 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.727254 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.727275 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:01Z","lastTransitionTime":"2026-01-29T11:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.831134 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.831201 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.831220 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.831242 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.831256 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:01Z","lastTransitionTime":"2026-01-29T11:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.925005 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" event={"ID":"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc","Type":"ContainerStarted","Data":"205005e542e6b395fe896960c605a3d4f516929d89a7fee3da8b2e9e1f9e6213"} Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.925495 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.934027 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.934088 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.934104 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.934127 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.934141 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:01Z","lastTransitionTime":"2026-01-29T11:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.945644 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:01Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.971865 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30312fd30ac74239e62cdf1a45e32c1a527e55d48553bd060cecfbc2595660b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48102e118ceddce358d9b6fcc9900a365130c5f1c75a08b393b337b6acd7e495\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"message\\\":\\\".EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 11:21:59.339393 5979 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:21:59.339975 5979 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 11:21:59.340022 5979 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 11:21:59.340046 5979 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 11:21:59.340054 5979 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 11:21:59.340075 5979 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 11:21:59.340083 5979 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 11:21:59.340136 5979 factory.go:656] Stopping watch factory\\\\nI0129 11:21:59.340160 5979 ovnkube.go:599] Stopped ovnkube\\\\nI0129 11:21:59.340166 5979 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 11:21:59.340190 5979 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 11:21:59.340194 5979 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 11:21:59.340207 5979 metrics.go:553] Stopping metrics server at 
address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 11:21:59.340216 5979 handler.go:208] Removed *v1.Pod\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:01Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.984384 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 20:00:07.768831672 +0000 UTC Jan 29 11:22:01 crc kubenswrapper[4766]: I0129 11:22:01.988845 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:01Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.006178 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:02Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.024048 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66
6b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:02Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.037449 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.037531 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.037546 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.037566 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.037589 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:02Z","lastTransitionTime":"2026-01-29T11:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.038599 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:02Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.054876 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:02Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.073586 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:02Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.089804 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:02Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.103525 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:02Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:02 
crc kubenswrapper[4766]: I0129 11:22:02.122146 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5dc50cb-2d41-45cd-8a3d-615212a20120\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c126f1878b27bb8648cebba2334b545a61682575e486c7752447760c630b71f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a4c1de706188e9d9c986cf611fcfa0afc2fa6d0d9e45908d9864fbd096fb7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a1895436e31a3a277d7ef40231e37f768d143472a5d055ec3fa3908d59eb806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d6b9ab2c5f75cb3a1a6580174135bdbe87b1e341de30ae151d2c7916fb6e85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T11:21:38Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 11:21:38.187211 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 11:21:38.187475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 11:21:38.188924 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-858855041/tls.crt::/tmp/serving-cert-858855041/tls.key\\\\\\\"\\\\nI0129 11:21:38.443648 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 11:21:38.447463 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 11:21:38.447603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 11:21:38.447664 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 11:21:38.447692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 11:21:38.471406 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 11:21:38.471454 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 11:21:38.471483 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 11:21:38.471487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 11:21:38.471491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 11:21:38.471436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 11:21:38.475175 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964049484efc670285ee54e4f6081c1f719edaa8143966e9762028ad97d2518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:02Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.137916 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a45c5025-5014-4cda-b09c-b8fe58daa0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c3e4b23de55df1e7416d9834c594e6b8baa72850428481ae9589ac2e3a2848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6af6b65be19d42cb0398dd814bea1497dd7a258533b34d84a55aafe3997a422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://368e7d2846989301de5391a33bce19ec278b8a597dad4b565340a9102cb0ca8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:02Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.139484 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.139517 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.139525 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.139542 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.139555 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:02Z","lastTransitionTime":"2026-01-29T11:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.153026 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:02Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.166650 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:02Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.178592 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:02Z is after 2025-08-24T17:21:41Z" Jan 29 
11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.192739 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:02Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.204880 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:02Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.223094 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205005e542e6b395fe896960c605a3d4f516929d89a7fee3da8b2e9e1f9e6213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:02Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.223633 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:22:02 crc kubenswrapper[4766]: E0129 11:22:02.223787 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.223913 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:02 crc kubenswrapper[4766]: E0129 11:22:02.224059 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.241798 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.241853 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.241869 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.241892 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.241906 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:02Z","lastTransitionTime":"2026-01-29T11:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.243946 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30312fd30ac74239e62cdf1a45e32c1a527e55d48553bd060cecfbc2595660b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48102e118ceddce358d9b6fcc9900a365130c5f1c75a08b393b337b6acd7e495\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"message\\\":\\\".EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 11:21:59.339393 5979 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:21:59.339975 5979 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 11:21:59.340022 5979 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 11:21:59.340046 5979 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 11:21:59.340054 5979 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 11:21:59.340075 5979 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 11:21:59.340083 5979 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 11:21:59.340136 5979 factory.go:656] Stopping watch factory\\\\nI0129 11:21:59.340160 5979 ovnkube.go:599] Stopped ovnkube\\\\nI0129 11:21:59.340166 5979 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 11:21:59.340190 5979 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 11:21:59.340194 5979 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 11:21:59.340207 5979 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 11:21:59.340216 5979 handler.go:208] Removed 
*v1.Pod\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:02Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.256596 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:02Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.272051 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:02Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.288280 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:02Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.304151 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:02Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.316584 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:02Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:02 
crc kubenswrapper[4766]: I0129 11:22:02.333163 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5dc50cb-2d41-45cd-8a3d-615212a20120\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c126f1878b27bb8648cebba2334b545a61682575e486c7752447760c630b71f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a4c1de706188e9d9c986cf611fcfa0afc2fa6d0d9e45908d9864fbd096fb7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a1895436e31a3a277d7ef40231e37f768d143472a5d055ec3fa3908d59eb806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d6b9ab2c5f75cb3a1a6580174135bdbe87b1e341de30ae151d2c7916fb6e85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T11:21:38Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 11:21:38.187211 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 11:21:38.187475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 11:21:38.188924 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-858855041/tls.crt::/tmp/serving-cert-858855041/tls.key\\\\\\\"\\\\nI0129 11:21:38.443648 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 11:21:38.447463 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 11:21:38.447603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 11:21:38.447664 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 11:21:38.447692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 11:21:38.471406 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 11:21:38.471454 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 11:21:38.471483 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 11:21:38.471487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 11:21:38.471491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 11:21:38.471436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 11:21:38.475175 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964049484efc670285ee54e4f6081c1f719edaa8143966e9762028ad97d2518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:02Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.344928 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.344979 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.344990 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.345010 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.345024 4766 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:02Z","lastTransitionTime":"2026-01-29T11:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.350304 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a45c5025-5014-4cda-b09c-b8fe58daa0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c3e4b23de55df1e7416d9834c594e6b8baa72850428481ae9589ac2e3a2848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6af6b65be19d42cb0398dd814bea1497dd7a258533b34d84a55aafe3997a422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://368e7d2846989301de5391a33bce19ec278b8a597dad4b565340a9102cb0ca8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:02Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.368362 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:02Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.385461 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:02Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.402761 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:02Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.418537 4766 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:02Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.434665 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:02Z is after 2025-08-24T17:21:41Z" Jan 29 
11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.448363 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.448424 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.448440 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.448458 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.448469 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:02Z","lastTransitionTime":"2026-01-29T11:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.453242 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/r
un/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:02Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.551034 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.551070 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.551079 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.551096 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.551105 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:02Z","lastTransitionTime":"2026-01-29T11:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.654580 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.654637 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.654649 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.654673 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.654689 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:02Z","lastTransitionTime":"2026-01-29T11:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.757289 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.757342 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.757352 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.757369 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.757379 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:02Z","lastTransitionTime":"2026-01-29T11:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.860684 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.860748 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.860774 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.860799 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.860814 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:02Z","lastTransitionTime":"2026-01-29T11:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.963932 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.963982 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.964000 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.964020 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.964033 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:02Z","lastTransitionTime":"2026-01-29T11:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:02 crc kubenswrapper[4766]: I0129 11:22:02.985540 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 23:17:56.090163549 +0000 UTC Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.067250 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.067284 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.067293 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.067308 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.067320 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:03Z","lastTransitionTime":"2026-01-29T11:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.169757 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.169802 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.169811 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.169833 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.169842 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:03Z","lastTransitionTime":"2026-01-29T11:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.223459 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5"
Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.223555 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 11:22:03 crc kubenswrapper[4766]: E0129 11:22:03.223624 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8"
Jan 29 11:22:03 crc kubenswrapper[4766]: E0129 11:22:03.223726 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.272458 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.272533 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.272550 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.272575 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.272592 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:03Z","lastTransitionTime":"2026-01-29T11:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.375251 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.375302 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.375312 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.375331 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.375341 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:03Z","lastTransitionTime":"2026-01-29T11:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.478492 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.478545 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.478554 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.478573 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.478585 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:03Z","lastTransitionTime":"2026-01-29T11:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.581764 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.581818 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.581827 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.581852 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.581864 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:03Z","lastTransitionTime":"2026-01-29T11:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.685386 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.685453 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.685464 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.685483 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.685493 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:03Z","lastTransitionTime":"2026-01-29T11:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.788141 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.788507 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.788600 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.788864 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.788974 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:03Z","lastTransitionTime":"2026-01-29T11:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.892153 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.892225 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.892237 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.892256 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.892270 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:03Z","lastTransitionTime":"2026-01-29T11:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.986148 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 16:07:56.081555634 +0000 UTC Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.995684 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.995741 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.995752 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.995769 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:03 crc kubenswrapper[4766]: I0129 11:22:03.995780 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:03Z","lastTransitionTime":"2026-01-29T11:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.098339 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.098377 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.098385 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.098399 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.098429 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:04Z","lastTransitionTime":"2026-01-29T11:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.200532 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.200594 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.200608 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.200628 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.200641 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:04Z","lastTransitionTime":"2026-01-29T11:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.224553 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.224664 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:22:04 crc kubenswrapper[4766]: E0129 11:22:04.224712 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:22:04 crc kubenswrapper[4766]: E0129 11:22:04.224807 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.303349 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.303398 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.303428 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.303447 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.303457 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:04Z","lastTransitionTime":"2026-01-29T11:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.406022 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.406111 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.406124 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.406142 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.406153 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:04Z","lastTransitionTime":"2026-01-29T11:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.509378 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.509450 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.509462 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.509482 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.509492 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:04Z","lastTransitionTime":"2026-01-29T11:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.612624 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.612691 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.612706 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.612735 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.612753 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:04Z","lastTransitionTime":"2026-01-29T11:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.716139 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.716200 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.716213 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.716232 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.716279 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:04Z","lastTransitionTime":"2026-01-29T11:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.818659 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.818719 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.818734 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.818752 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.818766 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:04Z","lastTransitionTime":"2026-01-29T11:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.921776 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.921843 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.921856 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.921874 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.921885 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:04Z","lastTransitionTime":"2026-01-29T11:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.936397 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zn4kn_98622e63-ce1a-413d-8a0a-32610d52ab94/ovnkube-controller/1.log"
Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.936908 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zn4kn_98622e63-ce1a-413d-8a0a-32610d52ab94/ovnkube-controller/0.log"
Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.939332 4766 generic.go:334] "Generic (PLEG): container finished" podID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerID="30312fd30ac74239e62cdf1a45e32c1a527e55d48553bd060cecfbc2595660b8" exitCode=1
Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.939384 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" event={"ID":"98622e63-ce1a-413d-8a0a-32610d52ab94","Type":"ContainerDied","Data":"30312fd30ac74239e62cdf1a45e32c1a527e55d48553bd060cecfbc2595660b8"}
Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.939463 4766 scope.go:117] "RemoveContainer" containerID="48102e118ceddce358d9b6fcc9900a365130c5f1c75a08b393b337b6acd7e495"
Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.940832 4766 scope.go:117] "RemoveContainer" containerID="30312fd30ac74239e62cdf1a45e32c1a527e55d48553bd060cecfbc2595660b8"
Jan 29 11:22:04 crc kubenswrapper[4766]: E0129 11:22:04.941234 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-zn4kn_openshift-ovn-kubernetes(98622e63-ce1a-413d-8a0a-32610d52ab94)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94"
Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.960620 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205005e542e6b395fe896960c605a3d4f516929d89a7fee3da8b2e9e1f9e6213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:04Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.984823 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30312fd30ac74239e62cdf1a45e32c1a527e55d48553bd060cecfbc2595660b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48102e118ceddce358d9b6fcc9900a365130c5f1c75a08b393b337b6acd7e495\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"message\\\":\\\".EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 11:21:59.339393 5979 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:21:59.339975 5979 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 11:21:59.340022 5979 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 11:21:59.340046 5979 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 11:21:59.340054 5979 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 11:21:59.340075 5979 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 11:21:59.340083 5979 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 11:21:59.340136 5979 factory.go:656] Stopping watch factory\\\\nI0129 11:21:59.340160 5979 ovnkube.go:599] Stopped ovnkube\\\\nI0129 11:21:59.340166 5979 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 11:21:59.340190 5979 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 11:21:59.340194 5979 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 11:21:59.340207 5979 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 11:21:59.340216 5979 handler.go:208] Removed *v1.Pod\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30312fd30ac74239e62cdf1a45e32c1a527e55d48553bd060cecfbc2595660b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:22:04Z\\\",\\\"message\\\":\\\"4eea-8fa6-69b0534e5caa 0xc0074001eb \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-apiserver-operator,},ClusterIP:10.217.5.109,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 11:22:04.290152 6248 lb_config.go:1031] Cluster endpoints for openshift-kube-apiserver-operator/metrics for network=default are: map[]\\\\nF0129 11:22:04.290162 6248 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network contr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:04Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:04 crc kubenswrapper[4766]: I0129 11:22:04.987332 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 08:58:51.049988271 +0000 UTC Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:04.999861 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:04Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.015907 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.025556 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.025732 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.025748 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.025771 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.025784 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:05Z","lastTransitionTime":"2026-01-29T11:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.034792 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a45c5025-5014-4cda-b09c-b8fe58daa0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c3e4b23de55df1e7416d9834c594e6b8baa72850428481ae9589ac2e3a2848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6af6b65be19d42cb0398dd814bea1497dd7a258533b34d84a55aafe3997a422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://368e7d2846989301de5391a33bce19ec278b8a597dad4b565340a9102cb0ca8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.051283 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.069369 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.087469 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.104818 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.119756 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:05 
crc kubenswrapper[4766]: I0129 11:22:05.128213 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.128245 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.128254 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.128269 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.128279 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:05Z","lastTransitionTime":"2026-01-29T11:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.139505 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5dc50cb-2d41-45cd-8a3d-615212a20120\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c126f1878b27bb8648cebba2334b545a61682575e486c7752447760c630b71f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a4c1de706188e9d9c986cf611fcfa0afc2fa6d0d9e45908d9864fbd096fb7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a1895436e31a3a277d7ef40231e37f768d143472a5d055ec3fa3908d59eb806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d6b9ab2c5f75cb3a1a6580174135bdbe87b1e341de30ae151d2c7916fb6e85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T11:21:38Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 11:21:38.187211 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 11:21:38.187475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 11:21:38.188924 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-858855041/tls.crt::/tmp/serving-cert-858855041/tls.key\\\\\\\"\\\\nI0129 11:21:38.443648 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 11:21:38.447463 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 11:21:38.447603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 11:21:38.447664 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 11:21:38.447692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 11:21:38.471406 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 11:21:38.471454 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 11:21:38.471483 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 11:21:38.471487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 11:21:38.471491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 11:21:38.471436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 11:21:38.475175 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964049484efc670285ee54e4f6081c1f719edaa8143966e9762028ad97d2518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.154297 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.171448 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.187881 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.202172 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 2025-08-24T17:21:41Z" Jan 29 
11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.217442 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.223738 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:22:05 crc kubenswrapper[4766]: E0129 11:22:05.223894 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.224142 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:22:05 crc kubenswrapper[4766]: E0129 11:22:05.224371 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.230322 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.230352 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.230361 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.230374 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.230384 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:05Z","lastTransitionTime":"2026-01-29T11:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.243486 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.259945 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 2025-08-24T17:21:41Z" Jan 29 
11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.278138 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.294520 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.318252 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84268
dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30312fd30ac74239e62cdf1a45e32c1a527e55d48553bd060cecfbc2595660b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48102e118ceddce358d9b6fcc9900a365130c5f1c75a08b393b337b6acd7e495\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"message\\\":\\\".EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 11:21:59.339393 5979 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:21:59.339975 5979 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 11:21:59.340022 5979 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 11:21:59.340046 5979 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 11:21:59.340054 5979 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 11:21:59.340075 5979 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 11:21:59.340083 5979 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 11:21:59.340136 5979 factory.go:656] Stopping watch factory\\\\nI0129 11:21:59.340160 5979 ovnkube.go:599] Stopped ovnkube\\\\nI0129 11:21:59.340166 5979 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 11:21:59.340190 5979 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 11:21:59.340194 5979 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 11:21:59.340207 5979 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 11:21:59.340216 5979 handler.go:208] Removed 
*v1.Pod\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30312fd30ac74239e62cdf1a45e32c1a527e55d48553bd060cecfbc2595660b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:22:04Z\\\",\\\"message\\\":\\\"4eea-8fa6-69b0534e5caa 0xc0074001eb \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-apiserver-operator,},ClusterIP:10.217.5.109,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 11:22:04.290152 6248 lb_config.go:1031] Cluster endpoints for openshift-kube-apiserver-operator/metrics for network=default are: map[]\\\\nF0129 11:22:04.290162 6248 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network 
contr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.333263 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.333391 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.333478 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.333492 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.333511 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.333527 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:05Z","lastTransitionTime":"2026-01-29T11:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.347471 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.365493 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205005e542e6b395fe896960c605a3d4f516929d89a7fee3da8b2e9e1f9e6213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.385539 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.402262 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.420791 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.435934 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.435993 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.436007 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.436026 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.436040 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:05Z","lastTransitionTime":"2026-01-29T11:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.438601 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.456727 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.474181 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5dc50cb-2d41-45cd-8a3d-615212a20120\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c126f1878b27bb8648cebba2334b545a61682575e486c7752447760c630b71f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a4c1de706188e9d9c986cf611fcfa0afc2fa6d0d9e45908d9864fbd096fb7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a1895436e31a3a277d7ef40231e37f768d143472a5d055ec3fa3908d59eb806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d6b9ab2c5f75cb3a1a6580174135bdbe87b1e341de30ae151d2c7916fb6e85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T11:21:38Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 11:21:38.187211 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 11:21:38.187475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 11:21:38.188924 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-858855041/tls.crt::/tmp/serving-cert-858855041/tls.key\\\\\\\"\\\\nI0129 11:21:38.443648 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 11:21:38.447463 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 11:21:38.447603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 11:21:38.447664 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 11:21:38.447692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 11:21:38.471406 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 11:21:38.471454 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 11:21:38.471483 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 11:21:38.471487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 11:21:38.471491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 11:21:38.471436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 11:21:38.475175 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964049484efc670285ee54e4f6081c1f719edaa8143966e9762028ad97d2518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.488911 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a45c5025-5014-4cda-b09c-b8fe58daa0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c3e4b23de55df1e7416d9834c594e6b8baa72850428481ae9589ac2e3a2848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6af6b65be19d42cb0398dd814bea1497dd7a258533b34d84a55aafe3997a422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://368e7d2846989301de5391a33bce19ec278b8a597dad4b565340a9102cb0ca8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.504179 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.538815 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.538862 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.538876 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.538894 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.538906 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:05Z","lastTransitionTime":"2026-01-29T11:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.643429 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.643487 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.643502 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.643521 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.643536 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:05Z","lastTransitionTime":"2026-01-29T11:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.747086 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.747151 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.747166 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.747186 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.747199 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:05Z","lastTransitionTime":"2026-01-29T11:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.849426 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.849842 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.849857 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.849877 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.849890 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:05Z","lastTransitionTime":"2026-01-29T11:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.943472 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zn4kn_98622e63-ce1a-413d-8a0a-32610d52ab94/ovnkube-controller/1.log" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.952452 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.952483 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.952494 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.952510 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.952520 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:05Z","lastTransitionTime":"2026-01-29T11:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.971037 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.971082 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.971093 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.971110 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.971122 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:05Z","lastTransitionTime":"2026-01-29T11:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.990244 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 21:15:31.026972816 +0000 UTC Jan 29 11:22:05 crc kubenswrapper[4766]: E0129 11:22:05.990495 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:05Z is after 
2025-08-24T17:21:41Z" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.995503 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.995564 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.995582 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.995604 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:05 crc kubenswrapper[4766]: I0129 11:22:05.995618 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:05Z","lastTransitionTime":"2026-01-29T11:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:06 crc kubenswrapper[4766]: E0129 11:22:06.011018 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:06Z is after 
2025-08-24T17:21:41Z" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.016287 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.016432 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.016448 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.016468 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.016479 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:06Z","lastTransitionTime":"2026-01-29T11:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:06 crc kubenswrapper[4766]: E0129 11:22:06.031141 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:06Z is after 
2025-08-24T17:21:41Z" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.036611 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.036665 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.036678 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.036700 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.036714 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:06Z","lastTransitionTime":"2026-01-29T11:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:06 crc kubenswrapper[4766]: E0129 11:22:06.049603 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.054359 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.054442 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.054460 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.054481 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.054495 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:06Z","lastTransitionTime":"2026-01-29T11:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:22:06 crc kubenswrapper[4766]: E0129 11:22:06.067680 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.069756 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.069799 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.069810 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.069828 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.069839 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:06Z","lastTransitionTime":"2026-01-29T11:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.172574 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.172626 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.172637 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.172656 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.172669 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:06Z","lastTransitionTime":"2026-01-29T11:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.223984 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.224125 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:22:06 crc kubenswrapper[4766]: E0129 11:22:06.224174 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:22:06 crc kubenswrapper[4766]: E0129 11:22:06.224270 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.276259 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.276321 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.276333 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.276353 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.276369 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:06Z","lastTransitionTime":"2026-01-29T11:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.378890 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.378938 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.378947 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.378965 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.378977 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:06Z","lastTransitionTime":"2026-01-29T11:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.481570 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.481625 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.481633 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.481650 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.481661 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:06Z","lastTransitionTime":"2026-01-29T11:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.583800 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.583851 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.583863 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.583880 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.583892 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:06Z","lastTransitionTime":"2026-01-29T11:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.686666 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.686715 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.686727 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.686745 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.686761 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:06Z","lastTransitionTime":"2026-01-29T11:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.789572 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.789615 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.789627 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.789644 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.789656 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:06Z","lastTransitionTime":"2026-01-29T11:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.892130 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.892170 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.892179 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.892194 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.892205 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:06Z","lastTransitionTime":"2026-01-29T11:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.990721 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 11:57:48.183442052 +0000 UTC Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.995032 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.995090 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.995101 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.995118 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:06 crc kubenswrapper[4766]: I0129 11:22:06.995128 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:06Z","lastTransitionTime":"2026-01-29T11:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.098308 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.098383 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.098400 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.098434 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.098447 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:07Z","lastTransitionTime":"2026-01-29T11:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.201172 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.201226 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.201237 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.201258 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.201270 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:07Z","lastTransitionTime":"2026-01-29T11:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.223496 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:22:07 crc kubenswrapper[4766]: E0129 11:22:07.223678 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.223743 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:22:07 crc kubenswrapper[4766]: E0129 11:22:07.223943 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.304095 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.304140 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.304150 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.304169 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.304181 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:07Z","lastTransitionTime":"2026-01-29T11:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.407562 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.407619 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.407630 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.407647 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.407660 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:07Z","lastTransitionTime":"2026-01-29T11:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.510656 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.510702 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.510716 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.510739 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.510751 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:07Z","lastTransitionTime":"2026-01-29T11:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.613573 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.613625 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.613637 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.613654 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.613665 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:07Z","lastTransitionTime":"2026-01-29T11:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.716474 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.716530 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.716540 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.716567 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.716589 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:07Z","lastTransitionTime":"2026-01-29T11:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.819588 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.819649 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.819662 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.819683 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.819696 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:07Z","lastTransitionTime":"2026-01-29T11:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.923014 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.923061 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.923075 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.923099 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.923113 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:07Z","lastTransitionTime":"2026-01-29T11:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:07 crc kubenswrapper[4766]: I0129 11:22:07.991286 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 01:28:49.877789296 +0000 UTC Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.026801 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.026844 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.026854 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.026875 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.026892 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:08Z","lastTransitionTime":"2026-01-29T11:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.129990 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.130048 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.130062 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.130083 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.130098 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:08Z","lastTransitionTime":"2026-01-29T11:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.223631 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.223700 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:22:08 crc kubenswrapper[4766]: E0129 11:22:08.224475 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:22:08 crc kubenswrapper[4766]: E0129 11:22:08.224574 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.232463 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.232893 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.233009 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.233119 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.233216 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:08Z","lastTransitionTime":"2026-01-29T11:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.336679 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.336744 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.336757 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.336776 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.336793 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:08Z","lastTransitionTime":"2026-01-29T11:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.440045 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.440109 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.440127 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.440152 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.440172 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:08Z","lastTransitionTime":"2026-01-29T11:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.543396 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.543506 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.543526 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.543561 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.543581 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:08Z","lastTransitionTime":"2026-01-29T11:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.646693 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.646737 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.646753 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.646772 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.646784 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:08Z","lastTransitionTime":"2026-01-29T11:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.749570 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.749621 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.749630 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.749646 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.749659 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:08Z","lastTransitionTime":"2026-01-29T11:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.852033 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.852331 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.852397 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.852487 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.852553 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:08Z","lastTransitionTime":"2026-01-29T11:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.956047 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.956105 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.956118 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.956139 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.956151 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:08Z","lastTransitionTime":"2026-01-29T11:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:08 crc kubenswrapper[4766]: I0129 11:22:08.991846 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 19:21:13.558896378 +0000 UTC Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.059169 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.059211 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.059225 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.059246 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.059261 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:09Z","lastTransitionTime":"2026-01-29T11:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.166772 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.166815 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.166829 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.166850 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.166864 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:09Z","lastTransitionTime":"2026-01-29T11:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.223665 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:22:09 crc kubenswrapper[4766]: E0129 11:22:09.223840 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.224381 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:22:09 crc kubenswrapper[4766]: E0129 11:22:09.224617 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.261192 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3910984a-a754-462f-9414-183a50bb78b8-metrics-certs\") pod \"network-metrics-daemon-xrjg5\" (UID: \"3910984a-a754-462f-9414-183a50bb78b8\") " pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:22:09 crc kubenswrapper[4766]: E0129 11:22:09.261337 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 11:22:09 crc kubenswrapper[4766]: E0129 11:22:09.261455 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3910984a-a754-462f-9414-183a50bb78b8-metrics-certs podName:3910984a-a754-462f-9414-183a50bb78b8 nodeName:}" failed. No retries permitted until 2026-01-29 11:22:25.261383219 +0000 UTC m=+82.373776280 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3910984a-a754-462f-9414-183a50bb78b8-metrics-certs") pod "network-metrics-daemon-xrjg5" (UID: "3910984a-a754-462f-9414-183a50bb78b8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.270342 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.270403 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.270636 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.270656 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.270667 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:09Z","lastTransitionTime":"2026-01-29T11:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.373615 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.373658 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.373670 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.373687 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.373700 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:09Z","lastTransitionTime":"2026-01-29T11:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.476120 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.476150 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.476159 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.476176 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.476188 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:09Z","lastTransitionTime":"2026-01-29T11:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.580755 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.580802 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.580817 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.580838 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.580856 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:09Z","lastTransitionTime":"2026-01-29T11:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.684260 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.684308 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.684318 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.684336 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.684345 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:09Z","lastTransitionTime":"2026-01-29T11:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.786853 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.786900 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.786909 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.786927 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.786937 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:09Z","lastTransitionTime":"2026-01-29T11:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.889901 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.889946 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.889954 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.889971 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.889982 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:09Z","lastTransitionTime":"2026-01-29T11:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.992616 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 01:05:55.134295907 +0000 UTC Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.992884 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.992915 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.992928 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.992947 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:09 crc kubenswrapper[4766]: I0129 11:22:09.992960 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:09Z","lastTransitionTime":"2026-01-29T11:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.095624 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.095679 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.095694 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.095714 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.095727 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:10Z","lastTransitionTime":"2026-01-29T11:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.198599 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.198645 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.198658 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.198677 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.198688 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:10Z","lastTransitionTime":"2026-01-29T11:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.223983 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.223983 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:10 crc kubenswrapper[4766]: E0129 11:22:10.224125 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:22:10 crc kubenswrapper[4766]: E0129 11:22:10.224183 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.301219 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.301274 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.301288 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.301379 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.301425 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:10Z","lastTransitionTime":"2026-01-29T11:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.404169 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.404231 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.404254 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.404279 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.404297 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:10Z","lastTransitionTime":"2026-01-29T11:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.506842 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.506895 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.506907 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.506927 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.506938 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:10Z","lastTransitionTime":"2026-01-29T11:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.610202 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.610242 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.610253 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.610270 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.610279 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:10Z","lastTransitionTime":"2026-01-29T11:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.713137 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.713187 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.713202 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.713220 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.713233 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:10Z","lastTransitionTime":"2026-01-29T11:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.816469 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.816521 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.816531 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.816548 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.816559 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:10Z","lastTransitionTime":"2026-01-29T11:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.918695 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.918732 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.918742 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.918761 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.918772 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:10Z","lastTransitionTime":"2026-01-29T11:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:10 crc kubenswrapper[4766]: I0129 11:22:10.993295 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 22:06:31.498078957 +0000 UTC Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.021361 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.021425 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.021438 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.021455 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.021465 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:11Z","lastTransitionTime":"2026-01-29T11:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.124118 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.124165 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.124175 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.124196 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.124207 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:11Z","lastTransitionTime":"2026-01-29T11:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.224120 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.224200 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:22:11 crc kubenswrapper[4766]: E0129 11:22:11.224323 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:22:11 crc kubenswrapper[4766]: E0129 11:22:11.224491 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.226292 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.226333 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.226350 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.226370 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.226390 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:11Z","lastTransitionTime":"2026-01-29T11:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.328551 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.328624 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.328637 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.328654 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.328666 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:11Z","lastTransitionTime":"2026-01-29T11:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.431910 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.431956 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.431968 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.431986 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.431996 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:11Z","lastTransitionTime":"2026-01-29T11:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.535103 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.535157 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.535171 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.535189 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.535205 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:11Z","lastTransitionTime":"2026-01-29T11:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.637631 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.637674 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.637689 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.637707 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.637718 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:11Z","lastTransitionTime":"2026-01-29T11:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.740789 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.740846 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.740864 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.740886 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.740899 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:11Z","lastTransitionTime":"2026-01-29T11:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.843673 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.843732 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.843745 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.843760 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.843771 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:11Z","lastTransitionTime":"2026-01-29T11:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.947202 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.947584 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.947683 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.947851 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.947958 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:11Z","lastTransitionTime":"2026-01-29T11:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:11 crc kubenswrapper[4766]: I0129 11:22:11.993762 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 16:19:16.535713413 +0000 UTC Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.050859 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.051152 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.051284 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.051379 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.051479 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:12Z","lastTransitionTime":"2026-01-29T11:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.154135 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.154447 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.154557 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.154644 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.154726 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:12Z","lastTransitionTime":"2026-01-29T11:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.223628 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.223702 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:12 crc kubenswrapper[4766]: E0129 11:22:12.223770 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:22:12 crc kubenswrapper[4766]: E0129 11:22:12.223856 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.258017 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.258057 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.258068 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.258092 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.258106 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:12Z","lastTransitionTime":"2026-01-29T11:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.361243 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.361293 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.361305 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.361321 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.361333 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:12Z","lastTransitionTime":"2026-01-29T11:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.464800 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.464882 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.464898 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.464922 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.464938 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:12Z","lastTransitionTime":"2026-01-29T11:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.567618 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.567695 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.567713 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.567737 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.567752 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:12Z","lastTransitionTime":"2026-01-29T11:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.569855 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.586044 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:12Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.602654 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:12Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.617942 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:12Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.635518 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:12Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.650935 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:12Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:12 crc 
kubenswrapper[4766]: I0129 11:22:12.665436 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:12Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.670260 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.670300 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.670314 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.670335 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.670350 4766 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:12Z","lastTransitionTime":"2026-01-29T11:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.680739 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205005e542e6b395fe896960c605a3d4f516929d89a7fee3da8b2e9e1f9e6213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:12Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.703167 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef
592c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30312fd30ac74239e62cdf1a45e32c1a527e55d48553bd060cecfbc2595660b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48102e118ceddce358d9b6fcc9900a365130c5f1c75a08b393b337b6acd7e495\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"message\\\":\\\".EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 11:21:59.339393 5979 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:21:59.339975 5979 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 11:21:59.340022 5979 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 11:21:59.340046 5979 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 11:21:59.340054 5979 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 11:21:59.340075 5979 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 11:21:59.340083 5979 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 11:21:59.340136 5979 factory.go:656] Stopping watch factory\\\\nI0129 11:21:59.340160 5979 ovnkube.go:599] Stopped ovnkube\\\\nI0129 11:21:59.340166 5979 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 11:21:59.340190 5979 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 11:21:59.340194 5979 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 11:21:59.340207 5979 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 11:21:59.340216 5979 handler.go:208] Removed 
*v1.Pod\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30312fd30ac74239e62cdf1a45e32c1a527e55d48553bd060cecfbc2595660b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:22:04Z\\\",\\\"message\\\":\\\"4eea-8fa6-69b0534e5caa 0xc0074001eb \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-apiserver-operator,},ClusterIP:10.217.5.109,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 11:22:04.290152 6248 lb_config.go:1031] Cluster endpoints for openshift-kube-apiserver-operator/metrics for network=default are: map[]\\\\nF0129 11:22:04.290162 6248 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network 
contr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:12Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.716582 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:12Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.732360 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:12Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.747635 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:12Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.765537 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:12Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.772873 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.772912 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.772921 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.772935 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.772944 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:12Z","lastTransitionTime":"2026-01-29T11:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.781038 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:12Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.796576 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5dc50cb-2d41-45cd-8a3d-615212a20120\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c126f1878b27bb8648cebba2334b545a61682575e486c7752447760c630b71f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a4c1de706188e9d9c986cf611fcfa0afc2fa6d0d9e45908d9864fbd096fb7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a1895436e31a3a277d7ef40231e37f768d143472a5d055ec3fa3908d59eb806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d6b9ab2c5f75cb3a1a6580174135bdbe87b1e341de30ae151d2c7916fb6e85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T11:21:38Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 11:21:38.187211 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 11:21:38.187475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 11:21:38.188924 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-858855041/tls.crt::/tmp/serving-cert-858855041/tls.key\\\\\\\"\\\\nI0129 11:21:38.443648 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 11:21:38.447463 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 11:21:38.447603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 11:21:38.447664 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 11:21:38.447692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 11:21:38.471406 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 11:21:38.471454 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 11:21:38.471483 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 11:21:38.471487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 11:21:38.471491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 11:21:38.471436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 11:21:38.475175 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964049484efc670285ee54e4f6081c1f719edaa8143966e9762028ad97d2518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:12Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.811005 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a45c5025-5014-4cda-b09c-b8fe58daa0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c3e4b23de55df1e7416d9834c594e6b8baa72850428481ae9589ac2e3a2848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6af6b65be19d42cb0398dd814bea1497dd7a258533b34d84a55aafe3997a422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://368e7d2846989301de5391a33bce19ec278b8a597dad4b565340a9102cb0ca8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:12Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.828364 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:12Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.875199 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.875231 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.875240 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.875256 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.875265 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:12Z","lastTransitionTime":"2026-01-29T11:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.977991 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.978126 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.978190 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.978209 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.978223 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:12Z","lastTransitionTime":"2026-01-29T11:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:12 crc kubenswrapper[4766]: I0129 11:22:12.994383 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 20:40:05.465455229 +0000 UTC Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.082195 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.082272 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.082286 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.082304 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.082314 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:13Z","lastTransitionTime":"2026-01-29T11:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.185688 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.185738 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.185750 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.185769 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.185781 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:13Z","lastTransitionTime":"2026-01-29T11:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.223526 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.223666 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:22:13 crc kubenswrapper[4766]: E0129 11:22:13.223689 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:22:13 crc kubenswrapper[4766]: E0129 11:22:13.223827 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.288916 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.289271 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.289360 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.289465 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.289590 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:13Z","lastTransitionTime":"2026-01-29T11:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.393467 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.393824 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.393907 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.393998 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.394091 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:13Z","lastTransitionTime":"2026-01-29T11:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.497282 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.497732 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.497832 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.497932 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.498012 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:13Z","lastTransitionTime":"2026-01-29T11:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.600476 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.600821 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.600910 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.601010 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.601105 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:13Z","lastTransitionTime":"2026-01-29T11:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.703648 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.703960 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.704064 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.704133 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.704198 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:13Z","lastTransitionTime":"2026-01-29T11:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.807617 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.807655 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.807666 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.807685 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.807700 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:13Z","lastTransitionTime":"2026-01-29T11:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.910372 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.910420 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.910429 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.910446 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.910456 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:13Z","lastTransitionTime":"2026-01-29T11:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:13 crc kubenswrapper[4766]: I0129 11:22:13.995490 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 10:21:55.248120087 +0000 UTC Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.012538 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.012588 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.012598 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.012613 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.012626 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:14Z","lastTransitionTime":"2026-01-29T11:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.114999 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.115042 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.115056 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.115090 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.115103 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:14Z","lastTransitionTime":"2026-01-29T11:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.217610 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.217663 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.217673 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.217691 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.217702 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:14Z","lastTransitionTime":"2026-01-29T11:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.223582 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.223614 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:22:14 crc kubenswrapper[4766]: E0129 11:22:14.223699 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:22:14 crc kubenswrapper[4766]: E0129 11:22:14.223903 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.320094 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.320159 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.320172 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.320195 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.320208 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:14Z","lastTransitionTime":"2026-01-29T11:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.422623 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.422685 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.422701 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.422720 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.422731 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:14Z","lastTransitionTime":"2026-01-29T11:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.525989 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.526050 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.526064 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.526085 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.526099 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:14Z","lastTransitionTime":"2026-01-29T11:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.629660 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.629732 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.629756 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.629779 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.629791 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:14Z","lastTransitionTime":"2026-01-29T11:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.733129 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.733168 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.733177 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.733194 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.733204 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:14Z","lastTransitionTime":"2026-01-29T11:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.836506 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.836571 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.836582 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.836605 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.836617 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:14Z","lastTransitionTime":"2026-01-29T11:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.939615 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.939676 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.939685 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.939706 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.939718 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:14Z","lastTransitionTime":"2026-01-29T11:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:14 crc kubenswrapper[4766]: I0129 11:22:14.996606 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 11:23:12.084211066 +0000 UTC Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.043801 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.043865 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.043879 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.043899 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.043912 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:15Z","lastTransitionTime":"2026-01-29T11:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.146392 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.146476 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.146489 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.146511 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.146525 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:15Z","lastTransitionTime":"2026-01-29T11:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.224000 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.224052 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:22:15 crc kubenswrapper[4766]: E0129 11:22:15.224141 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:22:15 crc kubenswrapper[4766]: E0129 11:22:15.224197 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.237597 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.249853 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.249895 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.249905 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.249923 4766 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.249934 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:15Z","lastTransitionTime":"2026-01-29T11:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.250484 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.263299 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:15Z is after 2025-08-24T17:21:41Z" Jan 29 
11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.276518 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.289668 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.307799 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205005e542e6b395fe896960c605a3d4f516929d89a7fee3da8b2e9e1f9e6213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.332212 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30312fd30ac74239e62cdf1a45e32c1a527e55d48553bd060cecfbc2595660b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48102e118ceddce358d9b6fcc9900a365130c5f1c75a08b393b337b6acd7e495\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"message\\\":\\\".EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 11:21:59.339393 5979 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:21:59.339975 5979 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 11:21:59.340022 5979 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 11:21:59.340046 5979 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 11:21:59.340054 5979 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 11:21:59.340075 5979 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 11:21:59.340083 5979 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 11:21:59.340136 5979 factory.go:656] Stopping watch factory\\\\nI0129 11:21:59.340160 5979 ovnkube.go:599] Stopped ovnkube\\\\nI0129 11:21:59.340166 5979 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 11:21:59.340190 5979 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 11:21:59.340194 5979 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 11:21:59.340207 5979 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 11:21:59.340216 5979 handler.go:208] Removed *v1.Pod\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30312fd30ac74239e62cdf1a45e32c1a527e55d48553bd060cecfbc2595660b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:22:04Z\\\",\\\"message\\\":\\\"4eea-8fa6-69b0534e5caa 0xc0074001eb \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-apiserver-operator,},ClusterIP:10.217.5.109,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 11:22:04.290152 6248 lb_config.go:1031] Cluster endpoints for openshift-kube-apiserver-operator/metrics for network=default are: map[]\\\\nF0129 11:22:04.290162 6248 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network contr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.347328 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.351999 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.352049 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.352071 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.352092 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.352104 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:15Z","lastTransitionTime":"2026-01-29T11:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.364709 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.381868 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.401313 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.416961 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:15 
crc kubenswrapper[4766]: I0129 11:22:15.435130 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5dc50cb-2d41-45cd-8a3d-615212a20120\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c126f1878b27bb8648cebba2334b545a61682575e486c7752447760c630b71f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a4c1de706188e9d9c986cf611fcfa0afc2fa6d0d9e45908d9864fbd096fb7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a1895436e31a3a277d7ef40231e37f768d143472a5d055ec3fa3908d59eb806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d6b9ab2c5f75cb3a1a6580174135bdbe87b1e341de30ae151d2c7916fb6e85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T11:21:38Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 11:21:38.187211 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 11:21:38.187475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 11:21:38.188924 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-858855041/tls.crt::/tmp/serving-cert-858855041/tls.key\\\\\\\"\\\\nI0129 11:21:38.443648 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 11:21:38.447463 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 11:21:38.447603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 11:21:38.447664 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 11:21:38.447692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 11:21:38.471406 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 11:21:38.471454 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 11:21:38.471483 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 11:21:38.471487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 11:21:38.471491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 11:21:38.471436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 11:21:38.475175 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964049484efc670285ee54e4f6081c1f719edaa8143966e9762028ad97d2518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.448650 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a45c5025-5014-4cda-b09c-b8fe58daa0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c3e4b23de55df1e7416d9834c594e6b8baa72850428481ae9589ac2e3a2848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6af6b65be19d42cb0398dd814bea1497dd7a258533b34d84a55aafe3997a422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://368e7d2846989301de5391a33bce19ec278b8a597dad4b565340a9102cb0ca8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.454519 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.454568 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.454582 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.454601 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.454616 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:15Z","lastTransitionTime":"2026-01-29T11:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.464584 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.479541 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.557840 4766 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.557906 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.557918 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.557936 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.557946 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:15Z","lastTransitionTime":"2026-01-29T11:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.660110 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.660168 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.660182 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.660204 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.660217 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:15Z","lastTransitionTime":"2026-01-29T11:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.762381 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.762438 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.762447 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.762464 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.762473 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:15Z","lastTransitionTime":"2026-01-29T11:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.864347 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.864402 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.864437 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.864459 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.864470 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:15Z","lastTransitionTime":"2026-01-29T11:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.967035 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.967091 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.967104 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.967124 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.967138 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:15Z","lastTransitionTime":"2026-01-29T11:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:15 crc kubenswrapper[4766]: I0129 11:22:15.996800 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 15:04:57.157534911 +0000 UTC Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.070491 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.070806 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.070872 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.070986 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.071062 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:16Z","lastTransitionTime":"2026-01-29T11:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.174319 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.174642 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.174734 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.174826 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.174922 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:16Z","lastTransitionTime":"2026-01-29T11:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.221158 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.221200 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.221215 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.221234 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.221246 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:16Z","lastTransitionTime":"2026-01-29T11:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.223925 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.223968 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:22:16 crc kubenswrapper[4766]: E0129 11:22:16.224043 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:22:16 crc kubenswrapper[4766]: E0129 11:22:16.224143 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:22:16 crc kubenswrapper[4766]: E0129 11:22:16.235259 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:16Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.239761 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.239804 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.239817 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.239834 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.239846 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:16Z","lastTransitionTime":"2026-01-29T11:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:16 crc kubenswrapper[4766]: E0129 11:22:16.253806 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:16Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.258565 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.258615 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure"
Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.258629 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.258649 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.258664 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:16Z","lastTransitionTime":"2026-01-29T11:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:22:16 crc kubenswrapper[4766]: E0129 11:22:16.275629 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status payload identical to the attempt above, omitted] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:16Z is after 2025-08-24T17:21:41Z"
Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.280554 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.280598 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure"
Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.280610 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.280628 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.280643 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:16Z","lastTransitionTime":"2026-01-29T11:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:22:16 crc kubenswrapper[4766]: E0129 11:22:16.296034 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status payload identical to the attempts above, omitted] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:16Z is after 2025-08-24T17:21:41Z"
Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.301672 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.301720 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure"
Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.301732 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.301751 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.301763 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:16Z","lastTransitionTime":"2026-01-29T11:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:22:16 crc kubenswrapper[4766]: E0129 11:22:16.316368 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status payload identical to the attempts above, omitted] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:16Z is after 2025-08-24T17:21:41Z"
Jan 29 11:22:16 crc kubenswrapper[4766]: E0129 11:22:16.316522 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.318363 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory"
Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.318399 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.318425 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.318446 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.318456 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:16Z","lastTransitionTime":"2026-01-29T11:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[the same five-record cycle (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") repeats verbatim with only timestamps advancing at 11:22:16.421, 16.525, 16.628, 16.732, 16.834, and 16.937]
Jan 29 11:22:16 crc kubenswrapper[4766]: I0129 11:22:16.997196 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 04:38:06.827639397 +0000 UTC
[the same five-record cycle repeats at 11:22:17.040 and 11:22:17.144]
Jan 29 11:22:17 crc kubenswrapper[4766]: I0129 11:22:17.223663 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 11:22:17 crc kubenswrapper[4766]: I0129 11:22:17.223762 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5"
Jan 29 11:22:17 crc kubenswrapper[4766]: E0129 11:22:17.223857 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 11:22:17 crc kubenswrapper[4766]: E0129 11:22:17.223941 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8"
[the same five-record cycle repeats at 11:22:17.247, 17.349, 17.451, and 17.554]
Jan 29 11:22:17 crc kubenswrapper[4766]: I0129 11:22:17.657635 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:22:17 crc kubenswrapper[4766]: I0129 11:22:17.657689 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:22:17 crc kubenswrapper[4766]: I0129 11:22:17.657702 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:22:17 crc kubenswrapper[4766]: I0129 11:22:17.657721 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:22:17 crc kubenswrapper[4766]: I0129 11:22:17.657733 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:17Z","lastTransitionTime":"2026-01-29T11:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 29 11:22:17 crc kubenswrapper[4766]: I0129 11:22:17.750913 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:22:17 crc kubenswrapper[4766]: I0129 11:22:17.751070 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:22:17 crc kubenswrapper[4766]: E0129 11:22:17.751181 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:22:49.751134267 +0000 UTC m=+106.863527288 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:22:17 crc kubenswrapper[4766]: E0129 11:22:17.751215 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 11:22:17 crc kubenswrapper[4766]: E0129 11:22:17.751236 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 11:22:17 crc kubenswrapper[4766]: E0129 11:22:17.751249 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:22:17 crc kubenswrapper[4766]: E0129 11:22:17.751312 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 11:22:49.751296012 +0000 UTC m=+106.863689023 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:22:17 crc kubenswrapper[4766]: I0129 11:22:17.751358 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:17 crc kubenswrapper[4766]: I0129 11:22:17.751449 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:17 crc kubenswrapper[4766]: I0129 11:22:17.751493 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:22:17 crc kubenswrapper[4766]: E0129 11:22:17.751639 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 11:22:17 crc kubenswrapper[4766]: E0129 11:22:17.751667 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 11:22:17 crc kubenswrapper[4766]: E0129 11:22:17.751672 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 11:22:17 crc kubenswrapper[4766]: E0129 11:22:17.751764 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 11:22:49.751731784 +0000 UTC m=+106.864124955 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 11:22:17 crc kubenswrapper[4766]: E0129 11:22:17.751795 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-29 11:22:49.751783606 +0000 UTC m=+106.864176827 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 11:22:17 crc kubenswrapper[4766]: E0129 11:22:17.751695 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 11:22:17 crc kubenswrapper[4766]: E0129 11:22:17.752133 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:22:17 crc kubenswrapper[4766]: E0129 11:22:17.752200 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 11:22:49.752175927 +0000 UTC m=+106.864568938 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:22:17 crc kubenswrapper[4766]: I0129 11:22:17.760263 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:17 crc kubenswrapper[4766]: I0129 11:22:17.760316 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:17 crc kubenswrapper[4766]: I0129 11:22:17.760331 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:17 crc kubenswrapper[4766]: I0129 11:22:17.760350 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:17 crc kubenswrapper[4766]: I0129 11:22:17.760363 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:17Z","lastTransitionTime":"2026-01-29T11:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:17 crc kubenswrapper[4766]: I0129 11:22:17.862790 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:17 crc kubenswrapper[4766]: I0129 11:22:17.862837 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:17 crc kubenswrapper[4766]: I0129 11:22:17.862846 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:17 crc kubenswrapper[4766]: I0129 11:22:17.862865 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:17 crc kubenswrapper[4766]: I0129 11:22:17.862877 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:17Z","lastTransitionTime":"2026-01-29T11:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:17 crc kubenswrapper[4766]: I0129 11:22:17.965322 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:17 crc kubenswrapper[4766]: I0129 11:22:17.965357 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:17 crc kubenswrapper[4766]: I0129 11:22:17.965365 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:17 crc kubenswrapper[4766]: I0129 11:22:17.965382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:17 crc kubenswrapper[4766]: I0129 11:22:17.965396 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:17Z","lastTransitionTime":"2026-01-29T11:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:17 crc kubenswrapper[4766]: I0129 11:22:17.997678 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 09:13:46.665875088 +0000 UTC Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.068738 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.068778 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.068789 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.068807 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.068818 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:18Z","lastTransitionTime":"2026-01-29T11:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.171736 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.171780 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.171792 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.171808 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.171821 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:18Z","lastTransitionTime":"2026-01-29T11:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.224161 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.224176 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:22:18 crc kubenswrapper[4766]: E0129 11:22:18.224302 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:22:18 crc kubenswrapper[4766]: E0129 11:22:18.224403 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.274811 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.274859 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.274871 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.274893 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.274905 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:18Z","lastTransitionTime":"2026-01-29T11:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.377931 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.377969 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.377978 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.377993 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.378003 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:18Z","lastTransitionTime":"2026-01-29T11:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.480724 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.480781 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.480792 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.480814 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.480830 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:18Z","lastTransitionTime":"2026-01-29T11:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.583781 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.583854 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.583865 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.583885 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.583898 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:18Z","lastTransitionTime":"2026-01-29T11:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.687105 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.687160 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.687173 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.687196 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.687210 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:18Z","lastTransitionTime":"2026-01-29T11:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.790299 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.790376 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.790387 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.790405 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.790437 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:18Z","lastTransitionTime":"2026-01-29T11:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.893023 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.893072 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.893088 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.893107 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.893130 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:18Z","lastTransitionTime":"2026-01-29T11:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.995955 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.995996 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.996010 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.996027 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.996038 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:18Z","lastTransitionTime":"2026-01-29T11:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:18 crc kubenswrapper[4766]: I0129 11:22:18.998758 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 08:16:05.234852609 +0000 UTC Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.098845 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.098900 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.098913 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.098938 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.098955 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:19Z","lastTransitionTime":"2026-01-29T11:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.201800 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.202083 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.202202 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.202306 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.202392 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:19Z","lastTransitionTime":"2026-01-29T11:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.224273 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:22:19 crc kubenswrapper[4766]: E0129 11:22:19.224460 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.224679 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:22:19 crc kubenswrapper[4766]: E0129 11:22:19.224856 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.225765 4766 scope.go:117] "RemoveContainer" containerID="30312fd30ac74239e62cdf1a45e32c1a527e55d48553bd060cecfbc2595660b8" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.242187 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-con
f-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:19Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.261177 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205005e542e6b395fe896960c605a3d4f516929d89a7fee3da8b2e9e1f9e6213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:19Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.286636 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30312fd30ac74239e62cdf1a45e32c1a527e55d4
8553bd060cecfbc2595660b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30312fd30ac74239e62cdf1a45e32c1a527e55d48553bd060cecfbc2595660b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:22:04Z\\\",\\\"message\\\":\\\"4eea-8fa6-69b0534e5caa 0xc0074001eb \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-apiserver-operator,},ClusterIP:10.217.5.109,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 11:22:04.290152 6248 lb_config.go:1031] Cluster endpoints for openshift-kube-apiserver-operator/metrics for network=default are: map[]\\\\nF0129 11:22:04.290162 6248 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network contr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:22:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zn4kn_openshift-ovn-kubernetes(98622e63-ce1a-413d-8a0a-32610d52ab94)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:19Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.303113 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:19Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.305243 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.305278 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.305290 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.305309 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.305322 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:19Z","lastTransitionTime":"2026-01-29T11:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.317695 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:19Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.333310 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a45c5025-5014-4cda-b09c-b8fe58daa0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c3e4b23de55df1e7416d9834c594e6b8baa72850428481ae9589ac2e3a2848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6af6b65be19d42cb0398dd814bea1497dd7a258533b34d84a55aafe3997a422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://368e7d2846989301de5391a33bce19ec278b8a597dad4b565340a9102cb0ca8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:19Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.346848 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:19Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.362566 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:19Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.379062 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:19Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.394982 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:19Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.407837 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.407872 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.407883 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.407903 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.407915 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:19Z","lastTransitionTime":"2026-01-29T11:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.410205 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:19Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.425273 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5dc50cb-2d41-45cd-8a3d-615212a20120\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c126f1878b27bb8648cebba2334b545a61682575e486c7752447760c630b71f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a4c1de706188e9d9c986cf611fcfa0afc2fa6d0d9e45908d9864fbd096fb7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a1895436e31a3a277d7ef40231e37f768d143472a5d055ec3fa3908d59eb806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d6b9ab2c5f75cb3a1a6580174135bdbe87b1e341de30ae151d2c7916fb6e85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T11:21:38Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 11:21:38.187211 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 11:21:38.187475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 11:21:38.188924 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-858855041/tls.crt::/tmp/serving-cert-858855041/tls.key\\\\\\\"\\\\nI0129 11:21:38.443648 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 11:21:38.447463 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 11:21:38.447603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 11:21:38.447664 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 11:21:38.447692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 11:21:38.471406 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 11:21:38.471454 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 11:21:38.471483 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 11:21:38.471487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 11:21:38.471491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 11:21:38.471436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 11:21:38.475175 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964049484efc670285ee54e4f6081c1f719edaa8143966e9762028ad97d2518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:19Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.438107 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:19Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.451688 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:19Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.465796 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:19Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.478935 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:19Z is after 2025-08-24T17:21:41Z" Jan 29 
11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.511104 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.511155 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.511168 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.511186 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.511197 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:19Z","lastTransitionTime":"2026-01-29T11:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.613822 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.613880 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.613891 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.613906 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.613915 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:19Z","lastTransitionTime":"2026-01-29T11:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.716486 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.716532 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.716543 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.716560 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.716571 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:19Z","lastTransitionTime":"2026-01-29T11:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.819974 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.820038 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.820058 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.820101 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.820138 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:19Z","lastTransitionTime":"2026-01-29T11:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.923103 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.923150 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.923167 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.923188 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.923204 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:19Z","lastTransitionTime":"2026-01-29T11:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:19 crc kubenswrapper[4766]: I0129 11:22:19.999479 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 16:43:56.690278909 +0000 UTC Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.010242 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zn4kn_98622e63-ce1a-413d-8a0a-32610d52ab94/ovnkube-controller/1.log" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.014451 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" event={"ID":"98622e63-ce1a-413d-8a0a-32610d52ab94","Type":"ContainerStarted","Data":"6e7da52dee9195e28eb49f30ee6a516c5b3f129154f1f1cee810f044f96bb4de"} Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.014891 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.025161 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.025214 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.025226 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.025246 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.025260 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:20Z","lastTransitionTime":"2026-01-29T11:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.030344 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:20Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.043537 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:20Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.060351 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205005e542e6b395fe896960c605a3d4f516929d89a7fee3da8b2e9e1f9e6213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:20Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.082337 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7da52dee9195e28eb49f30ee6a516c5b3f129154f1f1cee810f044f96bb4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30312fd30ac74239e62cdf1a45e32c1a527e55d48553bd060cecfbc2595660b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:22:04Z\\\",\\\"message\\\":\\\"4eea-8fa6-69b0534e5caa 0xc0074001eb \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-apiserver-operator,},ClusterIP:10.217.5.109,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 11:22:04.290152 6248 lb_config.go:1031] Cluster endpoints for openshift-kube-apiserver-operator/metrics for network=default are: map[]\\\\nF0129 11:22:04.290162 6248 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network 
contr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:22:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"
containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:20Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.101598 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:20Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.127019 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:20Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.127862 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.127906 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.127917 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.127933 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.127943 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:20Z","lastTransitionTime":"2026-01-29T11:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.142011 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:20Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.156018 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:20Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.173242 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:20Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.192981 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5dc50cb-2d41-45cd-8a3d-615212a20120\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c126f1878b27bb8648cebba2334b545a61682575e486c7752447760c630b71f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a4c1de706188e9d9c986cf611fcfa0afc2fa6d0d9e45908d9864fbd096fb7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a1895436e31a3a277d7ef40231e37f768d143472a5d055ec3fa3908d59eb806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d6b9ab2c5f75cb3a1a6580174135bdbe87b1e341de30ae151d2c7916fb6e85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T11:21:38Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 11:21:38.187211 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 11:21:38.187475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 11:21:38.188924 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-858855041/tls.crt::/tmp/serving-cert-858855041/tls.key\\\\\\\"\\\\nI0129 11:21:38.443648 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 11:21:38.447463 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 11:21:38.447603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 11:21:38.447664 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 11:21:38.447692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 11:21:38.471406 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 11:21:38.471454 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 11:21:38.471483 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 11:21:38.471487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 11:21:38.471491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 11:21:38.471436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 11:21:38.475175 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964049484efc670285ee54e4f6081c1f719edaa8143966e9762028ad97d2518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:20Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.218061 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a45c5025-5014-4cda-b09c-b8fe58daa0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c3e4b23de55df1e7416d9834c594e6b8baa72850428481ae9589ac2e3a2848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6af6b65be19d42cb0398dd814bea1497dd7a258533b34d84a55aafe3997a422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://368e7d2846989301de5391a33bce19ec278b8a597dad4b565340a9102cb0ca8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:20Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.223613 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:20 crc kubenswrapper[4766]: E0129 11:22:20.223776 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.223804 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:22:20 crc kubenswrapper[4766]: E0129 11:22:20.223896 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.234551 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.234582 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.234590 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.234606 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.234615 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:20Z","lastTransitionTime":"2026-01-29T11:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.237209 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:20Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.250807 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:20Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.272192 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:20Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.288075 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:20Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.305223 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:20Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.337173 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.337207 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.337217 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.337235 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.337245 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:20Z","lastTransitionTime":"2026-01-29T11:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.439346 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.439381 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.439392 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.439425 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.439444 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:20Z","lastTransitionTime":"2026-01-29T11:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.542840 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.542877 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.542887 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.542905 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.542916 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:20Z","lastTransitionTime":"2026-01-29T11:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.645215 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.645253 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.645262 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.645277 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.645289 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:20Z","lastTransitionTime":"2026-01-29T11:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.747581 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.747618 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.747629 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.747645 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.747656 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:20Z","lastTransitionTime":"2026-01-29T11:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.849906 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.849960 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.849971 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.849990 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.850007 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:20Z","lastTransitionTime":"2026-01-29T11:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.953456 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.953509 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.953519 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.953536 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:20 crc kubenswrapper[4766]: I0129 11:22:20.953546 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:20Z","lastTransitionTime":"2026-01-29T11:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.000680 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 03:39:28.090002869 +0000 UTC Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.056557 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.056613 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.056625 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.056647 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.056663 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:21Z","lastTransitionTime":"2026-01-29T11:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.159116 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.159172 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.159185 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.159204 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.159218 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:21Z","lastTransitionTime":"2026-01-29T11:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.223662 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.223661 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:22:21 crc kubenswrapper[4766]: E0129 11:22:21.224054 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:22:21 crc kubenswrapper[4766]: E0129 11:22:21.223859 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.261876 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.261948 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.261959 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.261981 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.261996 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:21Z","lastTransitionTime":"2026-01-29T11:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.364871 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.364915 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.364932 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.364949 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.364961 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:21Z","lastTransitionTime":"2026-01-29T11:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.467843 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.467905 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.467917 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.467937 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.467952 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:21Z","lastTransitionTime":"2026-01-29T11:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.571027 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.571068 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.571084 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.571102 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.571114 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:21Z","lastTransitionTime":"2026-01-29T11:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.674162 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.674221 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.674235 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.674261 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.674278 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:21Z","lastTransitionTime":"2026-01-29T11:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.777053 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.777099 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.777111 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.777130 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.777147 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:21Z","lastTransitionTime":"2026-01-29T11:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.879572 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.879612 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.879622 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.879638 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.879649 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:21Z","lastTransitionTime":"2026-01-29T11:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.981943 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.981988 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.982001 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.982020 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:21 crc kubenswrapper[4766]: I0129 11:22:21.982032 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:21Z","lastTransitionTime":"2026-01-29T11:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.001498 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 11:20:29.891692428 +0000 UTC Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.023370 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zn4kn_98622e63-ce1a-413d-8a0a-32610d52ab94/ovnkube-controller/2.log" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.024935 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zn4kn_98622e63-ce1a-413d-8a0a-32610d52ab94/ovnkube-controller/1.log" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.029850 4766 generic.go:334] "Generic (PLEG): container finished" podID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerID="6e7da52dee9195e28eb49f30ee6a516c5b3f129154f1f1cee810f044f96bb4de" exitCode=1 Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.029908 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" event={"ID":"98622e63-ce1a-413d-8a0a-32610d52ab94","Type":"ContainerDied","Data":"6e7da52dee9195e28eb49f30ee6a516c5b3f129154f1f1cee810f044f96bb4de"} Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.029961 4766 scope.go:117] "RemoveContainer" containerID="30312fd30ac74239e62cdf1a45e32c1a527e55d48553bd060cecfbc2595660b8" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.030817 4766 scope.go:117] "RemoveContainer" containerID="6e7da52dee9195e28eb49f30ee6a516c5b3f129154f1f1cee810f044f96bb4de" Jan 29 11:22:22 crc kubenswrapper[4766]: E0129 11:22:22.031038 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zn4kn_openshift-ovn-kubernetes(98622e63-ce1a-413d-8a0a-32610d52ab94)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.044968 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:22Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.062636 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84268
dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7da52dee9195e28eb49f30ee6a516c5b3f129154f1f1cee810f044f96bb4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30312fd30ac74239e62cdf1a45e32c1a527e55d48553bd060cecfbc2595660b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:22:04Z\\\",\\\"message\\\":\\\"4eea-8fa6-69b0534e5caa 0xc0074001eb \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-apiserver-operator,},ClusterIP:10.217.5.109,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 11:22:04.290152 6248 lb_config.go:1031] Cluster endpoints for openshift-kube-apiserver-operator/metrics for network=default are: map[]\\\\nF0129 11:22:04.290162 6248 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network 
contr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:22:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7da52dee9195e28eb49f30ee6a516c5b3f129154f1f1cee810f044f96bb4de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:22:21Z\\\",\\\"message\\\":\\\"ctor.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 11:22:20.757122 6589 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 11:22:20.757177 6589 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:20.757739 6589 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:20.757883 6589 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 11:22:20.757904 6589 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 11:22:20.758371 6589 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 11:22:20.758459 6589 factory.go:656] Stopping watch factory\\\\nI0129 11:22:20.758463 6589 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 11:22:20.758481 6589 ovnkube.go:599] Stopped ovnkube\\\\nI0129 11:22:20.758515 6589 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 11:22:20.758594 6589 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:22Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.073729 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:22Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.084467 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.084512 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.084523 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.084540 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.084550 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:22Z","lastTransitionTime":"2026-01-29T11:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.085566 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:22Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.102488 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205005e542e6b395fe896960c605a3d4f516929d89a7fee3da8b2e9e1f9e6213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:22Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.117868 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:22Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.132472 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:22Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.145653 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:22Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.163198 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:22Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.175547 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:22Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:22 
crc kubenswrapper[4766]: I0129 11:22:22.188372 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.188448 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.188459 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.188477 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.188488 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:22Z","lastTransitionTime":"2026-01-29T11:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.192348 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5dc50cb-2d41-45cd-8a3d-615212a20120\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c126f1878b27bb8648cebba2334b545a61682575e486c7752447760c630b71f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a4c1de706188e9d9c986cf611fcfa0afc2fa6d0d9e45908d9864fbd096fb7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a1895436e31a3a277d7ef40231e37f768d143472a5d055ec3fa3908d59eb806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d6b9ab2c5f75cb3a1a6580174135bdbe87b1e341de30ae151d2c7916fb6e85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T11:21:38Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 11:21:38.187211 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 11:21:38.187475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 11:21:38.188924 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-858855041/tls.crt::/tmp/serving-cert-858855041/tls.key\\\\\\\"\\\\nI0129 11:21:38.443648 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 11:21:38.447463 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 11:21:38.447603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 11:21:38.447664 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 11:21:38.447692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 11:21:38.471406 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 11:21:38.471454 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 11:21:38.471483 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 11:21:38.471487 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 11:21:38.471491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 11:21:38.471436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 11:21:38.475175 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964049484efc670285ee54e4f6081c1f719edaa8143966e9762028ad97d2518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:22Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.207580 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a45c5025-5014-4cda-b09c-b8fe58daa0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c3e4b23de55df1e7416d9834c594e6b8baa72850428481ae9589ac2e3a2848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6af6b65be19d42cb0398dd814bea1497dd7a258533b34d84a55aafe3997a422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://368e7d2846989301de5391a33bce19ec278b8a597dad4b565340a9102cb0ca8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:22Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.221511 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:22Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.223680 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.223680 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:22 crc kubenswrapper[4766]: E0129 11:22:22.223823 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:22:22 crc kubenswrapper[4766]: E0129 11:22:22.223881 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.237092 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:22Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.252240 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:22Z is after 2025-08-24T17:21:41Z" Jan 29 
11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.266693 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:22Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.290769 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.290807 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.290816 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.290832 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.290846 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:22Z","lastTransitionTime":"2026-01-29T11:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.393809 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.393852 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.393869 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.393886 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.393896 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:22Z","lastTransitionTime":"2026-01-29T11:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.496316 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.496350 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.496359 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.496373 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.496382 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:22Z","lastTransitionTime":"2026-01-29T11:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.599134 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.599195 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.599217 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.599240 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.599254 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:22Z","lastTransitionTime":"2026-01-29T11:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.702371 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.702406 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.702432 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.702448 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.702464 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:22Z","lastTransitionTime":"2026-01-29T11:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.804505 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.804549 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.804559 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.804580 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.804594 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:22Z","lastTransitionTime":"2026-01-29T11:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.907744 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.907803 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.907814 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.907851 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:22 crc kubenswrapper[4766]: I0129 11:22:22.907872 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:22Z","lastTransitionTime":"2026-01-29T11:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.002166 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 19:05:05.667610404 +0000 UTC Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.014442 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.014486 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.014497 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.014516 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.014528 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:23Z","lastTransitionTime":"2026-01-29T11:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.036059 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zn4kn_98622e63-ce1a-413d-8a0a-32610d52ab94/ovnkube-controller/2.log" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.117711 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.117770 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.117779 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.117797 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.117815 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:23Z","lastTransitionTime":"2026-01-29T11:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.220846 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.220888 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.220898 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.220939 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.220956 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:23Z","lastTransitionTime":"2026-01-29T11:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.224356 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.224374 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:22:23 crc kubenswrapper[4766]: E0129 11:22:23.224500 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:22:23 crc kubenswrapper[4766]: E0129 11:22:23.224565 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.323913 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.324007 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.324025 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.324063 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.324078 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:23Z","lastTransitionTime":"2026-01-29T11:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.427407 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.427524 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.427538 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.427562 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.427577 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:23Z","lastTransitionTime":"2026-01-29T11:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.530787 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.530838 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.530850 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.530870 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.530886 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:23Z","lastTransitionTime":"2026-01-29T11:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.633462 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.633523 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.633538 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.633555 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.633565 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:23Z","lastTransitionTime":"2026-01-29T11:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.737783 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.737836 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.737846 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.737864 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.737874 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:23Z","lastTransitionTime":"2026-01-29T11:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.840450 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.840494 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.840505 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.840521 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.840531 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:23Z","lastTransitionTime":"2026-01-29T11:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.943385 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.943473 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.943486 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.943521 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:23 crc kubenswrapper[4766]: I0129 11:22:23.943537 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:23Z","lastTransitionTime":"2026-01-29T11:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.003372 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 07:52:27.85809364 +0000 UTC Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.046311 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.046396 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.046408 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.046457 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.046471 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:24Z","lastTransitionTime":"2026-01-29T11:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.150442 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.150522 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.150542 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.150568 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.150595 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:24Z","lastTransitionTime":"2026-01-29T11:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.224052 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.224104 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:22:24 crc kubenswrapper[4766]: E0129 11:22:24.224312 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:22:24 crc kubenswrapper[4766]: E0129 11:22:24.224526 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.256093 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.256170 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.256198 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.256235 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.256249 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:24Z","lastTransitionTime":"2026-01-29T11:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.359761 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.359822 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.359835 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.359856 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.359872 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:24Z","lastTransitionTime":"2026-01-29T11:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.463310 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.463755 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.463876 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.463958 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.464022 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:24Z","lastTransitionTime":"2026-01-29T11:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.568029 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.568199 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.568210 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.568229 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.568241 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:24Z","lastTransitionTime":"2026-01-29T11:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.670802 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.670881 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.670892 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.670917 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.670930 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:24Z","lastTransitionTime":"2026-01-29T11:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.774203 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.774562 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.774676 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.774754 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.774902 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:24Z","lastTransitionTime":"2026-01-29T11:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.878175 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.878229 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.878240 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.878260 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.878273 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:24Z","lastTransitionTime":"2026-01-29T11:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.981075 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.981132 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.981146 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.981166 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:24 crc kubenswrapper[4766]: I0129 11:22:24.981180 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:24Z","lastTransitionTime":"2026-01-29T11:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.004393 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 11:38:35.344430088 +0000 UTC Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.085274 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.085364 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.085381 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.085403 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.085454 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:25Z","lastTransitionTime":"2026-01-29T11:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.188673 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.188722 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.188740 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.188759 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.188772 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:25Z","lastTransitionTime":"2026-01-29T11:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.224229 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:22:25 crc kubenswrapper[4766]: E0129 11:22:25.224406 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.224827 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:22:25 crc kubenswrapper[4766]: E0129 11:22:25.225035 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.239724 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.259154 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205005e542e6b395fe896960c605a3d4f516929d89a7fee3da8b2e9e1f9e6213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.284666 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7da52dee9195e28eb49f30ee6a516c5b3f129154f1f1cee810f044f96bb4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30312fd30ac74239e62cdf1a45e32c1a527e55d48553bd060cecfbc2595660b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:22:04Z\\\",\\\"message\\\":\\\"4eea-8fa6-69b0534e5caa 0xc0074001eb \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-apiserver-operator,},ClusterIP:10.217.5.109,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 11:22:04.290152 6248 lb_config.go:1031] Cluster endpoints for openshift-kube-apiserver-operator/metrics for network=default are: map[]\\\\nF0129 11:22:04.290162 6248 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network contr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:22:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7da52dee9195e28eb49f30ee6a516c5b3f129154f1f1cee810f044f96bb4de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:22:21Z\\\",\\\"message\\\":\\\"ctor.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 11:22:20.757122 6589 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 11:22:20.757177 6589 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:20.757739 6589 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:20.757883 6589 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 11:22:20.757904 6589 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 11:22:20.758371 6589 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 11:22:20.758459 6589 factory.go:656] Stopping watch factory\\\\nI0129 11:22:20.758463 6589 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 11:22:20.758481 6589 ovnkube.go:599] Stopped ovnkube\\\\nI0129 11:22:20.758515 6589 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 11:22:20.758594 6589 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.291999 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.292044 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.292054 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.292071 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.292080 4766 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:25Z","lastTransitionTime":"2026-01-29T11:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.304595 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.322329 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.342046 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3910984a-a754-462f-9414-183a50bb78b8-metrics-certs\") pod \"network-metrics-daemon-xrjg5\" (UID: \"3910984a-a754-462f-9414-183a50bb78b8\") " pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:22:25 crc kubenswrapper[4766]: E0129 11:22:25.342262 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 11:22:25 crc kubenswrapper[4766]: E0129 11:22:25.342359 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3910984a-a754-462f-9414-183a50bb78b8-metrics-certs podName:3910984a-a754-462f-9414-183a50bb78b8 nodeName:}" failed. 
No retries permitted until 2026-01-29 11:22:57.342332326 +0000 UTC m=+114.454725337 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3910984a-a754-462f-9414-183a50bb78b8-metrics-certs") pod "network-metrics-daemon-xrjg5" (UID: "3910984a-a754-462f-9414-183a50bb78b8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.342444 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.358104 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.381090 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5dc50cb-2d41-45cd-8a3d-615212a20120\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c126f1878b27bb8648cebba2334b545a61682575e486c7752447760c630b71f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a4c1de706188e9d9c986cf611fcfa0afc2fa6d0d9e45908d9864fbd096fb7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a1895436e31a3a277d7ef40231e37f768d143472a5d055ec3fa3908d59eb806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d6b9ab2c5f75cb3a1a6580174135bdbe87b1e341de30ae151d2c7916fb6e85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T11:21:38Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 11:21:38.187211 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 11:21:38.187475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 11:21:38.188924 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-858855041/tls.crt::/tmp/serving-cert-858855041/tls.key\\\\\\\"\\\\nI0129 11:21:38.443648 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 11:21:38.447463 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 11:21:38.447603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 11:21:38.447664 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 11:21:38.447692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 11:21:38.471406 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 11:21:38.471454 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 11:21:38.471483 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 11:21:38.471487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 11:21:38.471491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 11:21:38.471436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 11:21:38.475175 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964049484efc670285ee54e4f6081c1f719edaa8143966e9762028ad97d2518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.395045 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.395112 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.395126 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.395147 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.395162 4766 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:25Z","lastTransitionTime":"2026-01-29T11:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.401517 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a45c5025-5014-4cda-b09c-b8fe58daa0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c3e4b23de55df1e7416d9834c594e6b8baa72850428481ae9589ac2e3a2848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6af6b65be19d42cb0398dd814bea1497dd7a258533b34d84a55aafe3997a422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://368e7d2846989301de5391a33bce19ec278b8a597dad4b565340a9102cb0ca8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.422022 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.441249 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.458023 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.476327 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.493281 4766 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.497950 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.498002 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.498015 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.498034 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.498047 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:25Z","lastTransitionTime":"2026-01-29T11:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.508960 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.528279 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.600720 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.600777 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.600791 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.600811 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.600824 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:25Z","lastTransitionTime":"2026-01-29T11:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.704591 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.704654 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.704668 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.704690 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.704705 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:25Z","lastTransitionTime":"2026-01-29T11:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.808211 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.808268 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.808282 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.808305 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.808317 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:25Z","lastTransitionTime":"2026-01-29T11:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.911300 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.911348 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.911360 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.911384 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:25 crc kubenswrapper[4766]: I0129 11:22:25.911398 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:25Z","lastTransitionTime":"2026-01-29T11:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.004927 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 22:27:48.223300136 +0000 UTC Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.014934 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.014992 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.015003 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.015021 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.015034 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:26Z","lastTransitionTime":"2026-01-29T11:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.117709 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.117750 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.117761 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.117778 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.117790 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:26Z","lastTransitionTime":"2026-01-29T11:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.220612 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.220698 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.220713 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.220732 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.220746 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:26Z","lastTransitionTime":"2026-01-29T11:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.224028 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.224068 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:26 crc kubenswrapper[4766]: E0129 11:22:26.224187 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:22:26 crc kubenswrapper[4766]: E0129 11:22:26.224305 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.243089 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.324175 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.324228 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.324239 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.324261 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.324274 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:26Z","lastTransitionTime":"2026-01-29T11:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.427748 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.428043 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.428062 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.428082 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.428096 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:26Z","lastTransitionTime":"2026-01-29T11:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.460120 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.460883 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.461000 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.461030 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.461045 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:26Z","lastTransitionTime":"2026-01-29T11:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:26 crc kubenswrapper[4766]: E0129 11:22:26.476966 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:26Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.482367 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.482434 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.482454 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.482477 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.482491 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:26Z","lastTransitionTime":"2026-01-29T11:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:26 crc kubenswrapper[4766]: E0129 11:22:26.497674 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:26Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.502595 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.502634 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.502644 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.502660 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.502672 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:26Z","lastTransitionTime":"2026-01-29T11:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:26 crc kubenswrapper[4766]: E0129 11:22:26.519583 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:26Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.526230 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.526280 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.526291 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.526309 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.526319 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:26Z","lastTransitionTime":"2026-01-29T11:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:26 crc kubenswrapper[4766]: E0129 11:22:26.541638 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:26Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.546604 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.546652 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.546663 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.546682 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.546694 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:26Z","lastTransitionTime":"2026-01-29T11:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:26 crc kubenswrapper[4766]: E0129 11:22:26.565145 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:26Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:26 crc kubenswrapper[4766]: E0129 11:22:26.565309 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.567892 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.567937 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.567947 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.567966 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.567983 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:26Z","lastTransitionTime":"2026-01-29T11:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.670867 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.671318 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.671429 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.671523 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.671607 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:26Z","lastTransitionTime":"2026-01-29T11:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.774803 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.774884 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.774897 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.774917 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.774930 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:26Z","lastTransitionTime":"2026-01-29T11:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.877841 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.877925 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.877945 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.877970 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.877989 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:26Z","lastTransitionTime":"2026-01-29T11:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.981687 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.981729 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.981778 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.981799 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:26 crc kubenswrapper[4766]: I0129 11:22:26.981811 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:26Z","lastTransitionTime":"2026-01-29T11:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.005664 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 19:25:43.865151836 +0000 UTC Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.084866 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.084909 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.084921 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.084939 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.084952 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:27Z","lastTransitionTime":"2026-01-29T11:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.188531 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.188606 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.188621 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.188642 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.188656 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:27Z","lastTransitionTime":"2026-01-29T11:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.224555 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.224554 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:22:27 crc kubenswrapper[4766]: E0129 11:22:27.224768 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:22:27 crc kubenswrapper[4766]: E0129 11:22:27.224882 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.291593 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.291659 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.291675 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.291695 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.291707 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:27Z","lastTransitionTime":"2026-01-29T11:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.395248 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.395308 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.395325 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.395348 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.395362 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:27Z","lastTransitionTime":"2026-01-29T11:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.498449 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.498506 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.498519 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.498534 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.498543 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:27Z","lastTransitionTime":"2026-01-29T11:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.601016 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.601061 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.601074 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.601094 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.601106 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:27Z","lastTransitionTime":"2026-01-29T11:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.704502 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.704847 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.704957 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.705089 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.705343 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:27Z","lastTransitionTime":"2026-01-29T11:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.810047 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.810436 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.810521 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.810599 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.810675 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:27Z","lastTransitionTime":"2026-01-29T11:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.913523 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.913584 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.913601 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.913623 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:27 crc kubenswrapper[4766]: I0129 11:22:27.913636 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:27Z","lastTransitionTime":"2026-01-29T11:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 11:22:28 crc kubenswrapper[4766]: I0129 11:22:28.006942 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 01:32:47.637236928 +0000 UTC
[node-status block repeats at 11:22:28.018189 and 11:22:28.121511]
Jan 29 11:22:28 crc kubenswrapper[4766]: I0129 11:22:28.223565 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 11:22:28 crc kubenswrapper[4766]: I0129 11:22:28.223628 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 11:22:28 crc kubenswrapper[4766]: E0129 11:22:28.223731 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 11:22:28 crc kubenswrapper[4766]: E0129 11:22:28.223954 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[node-status block repeats at 11:22:28.224430 and 11:22:28.326969]
[node-status block repeats at 11:22:28.430086, 11:22:28.532711, 11:22:28.635936, 11:22:28.739077, 11:22:28.842299, and 11:22:28.947667]
Jan 29 11:22:29 crc kubenswrapper[4766]: I0129 11:22:29.008146 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 16:42:05.002033933 +0000 UTC
[node-status block repeats at 11:22:29.050592 and 11:22:29.153170]
Jan 29 11:22:29 crc kubenswrapper[4766]: I0129 11:22:29.223460 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5"
Jan 29 11:22:29 crc kubenswrapper[4766]: I0129 11:22:29.223560 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 11:22:29 crc kubenswrapper[4766]: E0129 11:22:29.223711 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8"
Jan 29 11:22:29 crc kubenswrapper[4766]: E0129 11:22:29.223836 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[node-status block repeats at 11:22:29.257368 and 11:22:29.360817]
[node-status block repeats at 11:22:29.465010, 11:22:29.568600, 11:22:29.672654, 11:22:29.776242, 11:22:29.879627, and 11:22:29.983452]
Jan 29 11:22:30 crc kubenswrapper[4766]: I0129 11:22:30.008377 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 21:43:09.317780113 +0000 UTC
[node-status block repeats at 11:22:30.086803 and 11:22:30.190459]
Jan 29 11:22:30 crc kubenswrapper[4766]: I0129 11:22:30.224198 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 11:22:30 crc kubenswrapper[4766]: I0129 11:22:30.224363 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 11:22:30 crc kubenswrapper[4766]: E0129 11:22:30.224456 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 11:22:30 crc kubenswrapper[4766]: E0129 11:22:30.224592 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[node-status block repeats at 11:22:30.293497 and 11:22:30.397347]
[node-status block repeats at 11:22:30.500970, 11:22:30.604186, 11:22:30.707798, 11:22:30.810585, and 11:22:30.912700]
Jan 29 11:22:31 crc kubenswrapper[4766]: I0129 11:22:31.009097 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 22:47:06.153547287 +0000 UTC
[node-status block repeats at 11:22:31.015466, 11:22:31.118694, and 11:22:31.222902]
Jan 29 11:22:31 crc kubenswrapper[4766]: I0129 11:22:31.223651 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5"
Jan 29 11:22:31 crc kubenswrapper[4766]: I0129 11:22:31.223683 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 11:22:31 crc kubenswrapper[4766]: E0129 11:22:31.223833 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8"
Jan 29 11:22:31 crc kubenswrapper[4766]: E0129 11:22:31.224081 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[node-status block repeats at 11:22:31.328088 and 11:22:31.431919]
[node-status block repeats at 11:22:31.535230, 11:22:31.638356, 11:22:31.741482, 11:22:31.844531, and 11:22:31.947474]
Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.010141 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 21:55:32.357472576 +0000 UTC
[node-status block repeats at 11:22:32.050474 and 11:22:32.154446]
Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.223512 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 11:22:32 crc kubenswrapper[4766]: E0129 11:22:32.223694 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.224037 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 11:22:32 crc kubenswrapper[4766]: E0129 11:22:32.224292 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[node-status block repeats at 11:22:32.257758 and 11:22:32.360499]
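Every error in this stretch reduces to one condition: nothing has written a CNI config into /etc/kubernetes/cni/net.d/ yet. A quick check one might run on the node while waiting for the network provider; a sketch only, and the file extensions are the ones CNI loaders conventionally accept, not something this log confirms:

    import glob
    import os

    CNI_DIR = "/etc/kubernetes/cni/net.d"  # directory named in the kubelet errors

    # CNI loaders conventionally pick up *.conf, *.conflist, and *.json files.
    confs = sorted(
        p for ext in ("*.conf", "*.conflist", "*.json")
        for p in glob.glob(os.path.join(CNI_DIR, ext))
    )
    if confs:
        print("CNI configuration present:", confs)
    else:
        print("no CNI configuration file in", CNI_DIR, "- network provider not started yet")

Once a config appears there, the kubelet's NetworkReady check flips and the NodeNotReady heartbeats below should stop.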
Has your network provider started?"} Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.465157 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.465215 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.465226 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.465249 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.465265 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:32Z","lastTransitionTime":"2026-01-29T11:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.568204 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.568248 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.568261 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.568281 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.568295 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:32Z","lastTransitionTime":"2026-01-29T11:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.672611 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.672665 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.672677 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.672697 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.672711 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:32Z","lastTransitionTime":"2026-01-29T11:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.776892 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.776964 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.776973 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.776991 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.777003 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:32Z","lastTransitionTime":"2026-01-29T11:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.879477 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.879537 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.879553 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.879575 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.879587 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:32Z","lastTransitionTime":"2026-01-29T11:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.982209 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.982286 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.982304 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.982326 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:32 crc kubenswrapper[4766]: I0129 11:22:32.982341 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:32Z","lastTransitionTime":"2026-01-29T11:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.010682 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 22:47:44.142900081 +0000 UTC Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.085329 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.085400 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.085451 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.085475 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.085492 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:33Z","lastTransitionTime":"2026-01-29T11:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.188628 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.188679 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.188691 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.188709 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.188721 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:33Z","lastTransitionTime":"2026-01-29T11:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.224551 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.224657 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:22:33 crc kubenswrapper[4766]: E0129 11:22:33.224777 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:22:33 crc kubenswrapper[4766]: E0129 11:22:33.224888 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.291976 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.292036 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.292049 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.292068 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.292081 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:33Z","lastTransitionTime":"2026-01-29T11:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.394951 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.394993 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.395006 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.395025 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.395039 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:33Z","lastTransitionTime":"2026-01-29T11:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.497886 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.497934 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.497948 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.497965 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.497982 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:33Z","lastTransitionTime":"2026-01-29T11:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.600600 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.600656 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.600665 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.600683 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.600695 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:33Z","lastTransitionTime":"2026-01-29T11:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.704285 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.704346 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.704358 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.704380 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.704394 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:33Z","lastTransitionTime":"2026-01-29T11:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.807152 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.807210 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.807222 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.807242 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.807256 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:33Z","lastTransitionTime":"2026-01-29T11:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.909841 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.909897 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.909907 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.909925 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:33 crc kubenswrapper[4766]: I0129 11:22:33.909935 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:33Z","lastTransitionTime":"2026-01-29T11:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.010920 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 06:59:56.321140292 +0000 UTC Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.012811 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.012852 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.012861 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.012879 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.012890 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:34Z","lastTransitionTime":"2026-01-29T11:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.116454 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.116502 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.116511 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.116528 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.116540 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:34Z","lastTransitionTime":"2026-01-29T11:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.219233 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.219273 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.219282 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.219313 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.219325 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:34Z","lastTransitionTime":"2026-01-29T11:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.223688 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.223711 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:34 crc kubenswrapper[4766]: E0129 11:22:34.223792 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:22:34 crc kubenswrapper[4766]: E0129 11:22:34.223884 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.321821 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.321869 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.321878 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.321894 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.321905 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:34Z","lastTransitionTime":"2026-01-29T11:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.424149 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.424209 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.424220 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.424237 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.424254 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:34Z","lastTransitionTime":"2026-01-29T11:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.530613 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.530956 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.531162 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.531190 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.531207 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:34Z","lastTransitionTime":"2026-01-29T11:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.634324 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.634377 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.634389 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.634435 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.634453 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:34Z","lastTransitionTime":"2026-01-29T11:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.737261 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.737313 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.737325 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.737344 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.737356 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:34Z","lastTransitionTime":"2026-01-29T11:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.840287 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.840346 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.840359 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.840377 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.840390 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:34Z","lastTransitionTime":"2026-01-29T11:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.943296 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.943349 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.943360 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.943380 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:34 crc kubenswrapper[4766]: I0129 11:22:34.943394 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:34Z","lastTransitionTime":"2026-01-29T11:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.012142 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 12:15:01.176816341 +0000 UTC Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.048401 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.048530 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.048548 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.048569 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.048583 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:35Z","lastTransitionTime":"2026-01-29T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.151804 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.151862 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.151877 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.151904 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.151917 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:35Z","lastTransitionTime":"2026-01-29T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.224018 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:22:35 crc kubenswrapper[4766]: E0129 11:22:35.224179 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.224666 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:22:35 crc kubenswrapper[4766]: E0129 11:22:35.224773 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.224982 4766 scope.go:117] "RemoveContainer" containerID="6e7da52dee9195e28eb49f30ee6a516c5b3f129154f1f1cee810f044f96bb4de" Jan 29 11:22:35 crc kubenswrapper[4766]: E0129 11:22:35.225170 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zn4kn_openshift-ovn-kubernetes(98622e63-ce1a-413d-8a0a-32610d52ab94)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.241591 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.255361 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.255451 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.255466 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.255485 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.255497 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:35Z","lastTransitionTime":"2026-01-29T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.262965 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205005e542e6b395fe896960c605a3d4f516929d89a7fee3da8b2e9e1f9e6213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:
21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.292735 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7da52dee9195e28eb49f30ee6a516c5b3f1291
54f1f1cee810f044f96bb4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7da52dee9195e28eb49f30ee6a516c5b3f129154f1f1cee810f044f96bb4de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:22:21Z\\\",\\\"message\\\":\\\"ctor.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 11:22:20.757122 6589 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 11:22:20.757177 6589 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:20.757739 6589 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:20.757883 6589 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 11:22:20.757904 6589 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 11:22:20.758371 6589 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 11:22:20.758459 6589 factory.go:656] Stopping watch factory\\\\nI0129 11:22:20.758463 6589 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 11:22:20.758481 6589 ovnkube.go:599] Stopped ovnkube\\\\nI0129 11:22:20.758515 6589 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 11:22:20.758594 6589 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:22:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zn4kn_openshift-ovn-kubernetes(98622e63-ce1a-413d-8a0a-32610d52ab94)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.309586 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.331117 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.352099 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.358161 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.358251 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.358262 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.358280 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.358291 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:35Z","lastTransitionTime":"2026-01-29T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.372292 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5dc50cb-2d41-45cd-8a3d-615212a20120\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c126f1878b27bb8648cebba2334b545a61682575e486c7752447760c630b71f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a4c1de706188e9d9c986cf611fcfa0afc2fa6d0d9e45908d9864fbd096fb7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a1895436e31a3a277d7ef40231e37f768d143472a5d055ec3fa3908d59eb806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d6b9ab2c5f75cb3a1a6580174135bdbe87b1e341de30ae151d2c7916fb6e85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T11:21:38Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 11:21:38.187211 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 11:21:38.187475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 11:21:38.188924 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-858855041/tls.crt::/tmp/serving-cert-858855041/tls.key\\\\\\\"\\\\nI0129 11:21:38.443648 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 11:21:38.447463 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 11:21:38.447603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 11:21:38.447664 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 11:21:38.447692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 11:21:38.471406 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 11:21:38.471454 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 11:21:38.471483 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 11:21:38.471487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 11:21:38.471491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 11:21:38.471436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 11:21:38.475175 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964049484efc670285ee54e4f6081c1f719edaa8143966e9762028ad97d2518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.393007 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a45c5025-5014-4cda-b09c-b8fe58daa0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c3e4b23de55df1e7416d9834c594e6b8baa72850428481ae9589ac2e3a2848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6af6b65be19d42cb0398dd814bea1497dd7a258533b34d84a55aafe3997a422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://368e7d2846989301de5391a33bce19ec278b8a597dad4b565340a9102cb0ca8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.412752 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.434074 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.455225 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.460805 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.460954 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.460970 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.460988 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.461001 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:35Z","lastTransitionTime":"2026-01-29T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.472274 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.490673 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20dd6698-d285-4d33-b108-af2e963a6230\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627f1cbde0bcbdc735a292c896c151e796db5038d619da66cc9d97c9e94a5721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340091929d2db093c111ffe69890053b76766a605522ff9ce5ee2d307430a47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://340091929d2db093c111ffe69890053b76766a605522ff9ce5ee2d307430a47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.510231 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.527811 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.543235 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 
11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.562886 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.564029 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.564113 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.564130 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.564146 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.564158 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:35Z","lastTransitionTime":"2026-01-29T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.578832 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.593617 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.612327 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205005e542e6b395fe896960c605a3d4f516929d89a7fee3da8b2e9e1f9e6213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.639537 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7da52dee9195e28eb49f30ee6a516c5b3f129154f1f1cee810f044f96bb4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7da52dee9195e28eb49f30ee6a516c5b3f129154f1f1cee810f044f96bb4de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:22:21Z\\\",\\\"message\\\":\\\"ctor.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 11:22:20.757122 6589 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 11:22:20.757177 6589 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:20.757739 6589 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:20.757883 6589 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 11:22:20.757904 6589 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 11:22:20.758371 6589 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 11:22:20.758459 6589 factory.go:656] Stopping watch factory\\\\nI0129 11:22:20.758463 6589 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 11:22:20.758481 6589 ovnkube.go:599] Stopped ovnkube\\\\nI0129 11:22:20.758515 6589 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 11:22:20.758594 6589 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:22:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zn4kn_openshift-ovn-kubernetes(98622e63-ce1a-413d-8a0a-32610d52ab94)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.659551 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.666353 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.666401 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.666429 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.666449 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.666461 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:35Z","lastTransitionTime":"2026-01-29T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.677210 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.693901 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.711010 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.725403 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.745386 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5dc50cb-2d41-45cd-8a3d-615212a20120\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c126f1878b27bb8648cebba2334b545a61682575e486c7752447760c630b71f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a4c1de706188e9d9c986cf611fcfa0afc2fa6d0d9e45908d9864fbd096fb7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a1895436e31a3a277d7ef40231e37f768d143472a5d055ec3fa3908d59eb806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d6b9ab2c5f75cb3a1a6580174135bdbe87b1e341de30ae151d2c7916fb6e85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T11:21:38Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 11:21:38.187211 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 11:21:38.187475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 11:21:38.188924 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-858855041/tls.crt::/tmp/serving-cert-858855041/tls.key\\\\\\\"\\\\nI0129 11:21:38.443648 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 11:21:38.447463 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 11:21:38.447603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 11:21:38.447664 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 11:21:38.447692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 11:21:38.471406 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 11:21:38.471454 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 11:21:38.471483 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 11:21:38.471487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 11:21:38.471491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 11:21:38.471436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 11:21:38.475175 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964049484efc670285ee54e4f6081c1f719edaa8143966e9762028ad97d2518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.764866 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a45c5025-5014-4cda-b09c-b8fe58daa0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c3e4b23de55df1e7416d9834c594e6b8baa72850428481ae9589ac2e3a2848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6af6b65be19d42cb0398dd814bea1497dd7a258533b34d84a55aafe3997a422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://368e7d2846989301de5391a33bce19ec278b8a597dad4b565340a9102cb0ca8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.768636 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.768680 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.768689 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.768706 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.768718 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:35Z","lastTransitionTime":"2026-01-29T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.783706 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.800380 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:
51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.817781 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20dd6698-d285-4d33-b108-af2e963a6230\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627f1cbde0bcbdc735a292c896c151e796db5038d619da66cc9d97c9e94a5721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340091929d2db093c111ffe69890053b76766a605522ff9ce5ee2d307430a47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://340091929d2db093c111ffe69890053b76766a605522ff9ce5ee2d307430a47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.843639 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.862530 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.871268 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.872025 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.872141 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.872168 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.872182 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:35Z","lastTransitionTime":"2026-01-29T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.884479 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.975882 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.975957 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.975971 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.975994 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:35 crc kubenswrapper[4766]: I0129 11:22:35.976008 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:35Z","lastTransitionTime":"2026-01-29T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.013154 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 21:51:20.957666348 +0000 UTC Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.078970 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.079021 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.079036 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.079054 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.079066 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:36Z","lastTransitionTime":"2026-01-29T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.183008 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.183101 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.183115 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.183160 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.183177 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:36Z","lastTransitionTime":"2026-01-29T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.223843 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.223987 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:22:36 crc kubenswrapper[4766]: E0129 11:22:36.224064 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:22:36 crc kubenswrapper[4766]: E0129 11:22:36.224206 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.286369 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.286454 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.286469 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.286487 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.286500 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:36Z","lastTransitionTime":"2026-01-29T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.389462 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.389528 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.389544 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.389566 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.389582 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:36Z","lastTransitionTime":"2026-01-29T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.493108 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.493173 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.493184 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.493204 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.493220 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:36Z","lastTransitionTime":"2026-01-29T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.596237 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.596338 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.596354 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.596382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.596402 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:36Z","lastTransitionTime":"2026-01-29T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.699605 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.699674 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.699686 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.699707 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.699720 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:36Z","lastTransitionTime":"2026-01-29T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.803115 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.803169 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.803182 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.803205 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.803217 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:36Z","lastTransitionTime":"2026-01-29T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.906843 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.906912 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.906926 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.906949 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.906964 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:36Z","lastTransitionTime":"2026-01-29T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.926578 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.926625 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.926636 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.926663 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.926683 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:36Z","lastTransitionTime":"2026-01-29T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:36 crc kubenswrapper[4766]: E0129 11:22:36.941579 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.946964 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.947029 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.947039 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.947075 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.947088 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:36Z","lastTransitionTime":"2026-01-29T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:36 crc kubenswrapper[4766]: E0129 11:22:36.962860 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.967878 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.967936 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.967947 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.967973 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.967995 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:36Z","lastTransitionTime":"2026-01-29T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:36 crc kubenswrapper[4766]: E0129 11:22:36.985973 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.995275 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.995327 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
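Every status patch above is rejected for the same reason: the serving certificate for the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-29, so the TLS handshake fails before the patch is ever evaluated. Below is a minimal sketch of the same validity check the handshake performs, assuming the third-party cryptography package (42+ for the *_utc accessors) and an endpoint that completes a handshake without demanding a client certificate; only the host and port come from the log.

    import ssl
    from datetime import datetime, timezone
    from cryptography import x509  # third-party package; assumed available

    def check_cert_window(host: str, port: int) -> None:
        # Fetch the peer certificate WITHOUT verifying it; verification is
        # exactly what fails here, so it must be skipped to inspect the cert.
        pem = ssl.get_server_certificate((host, port))
        cert = x509.load_pem_x509_certificate(pem.encode())
        now = datetime.now(timezone.utc)
        if now > cert.not_valid_after_utc:
            print(f"expired: current time {now:%Y-%m-%dT%H:%M:%SZ} "
                  f"is after {cert.not_valid_after_utc:%Y-%m-%dT%H:%M:%SZ}")
        elif now < cert.not_valid_before_utc:
            print("not yet valid")
        else:
            print("within validity window")

    check_cert_window("127.0.0.1", 9743)  # webhook endpoint from the log
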
event="NodeHasNoDiskPressure" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.995340 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.995360 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:36 crc kubenswrapper[4766]: I0129 11:22:36.995372 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:36Z","lastTransitionTime":"2026-01-29T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.013520 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 17:58:56.389403502 +0000 UTC Jan 29 11:22:37 crc kubenswrapper[4766]: E0129 11:22:37.015074 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.020547 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.020615 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
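The certificate_manager.go line records the other clock-related detail: the kubelet-serving certificate is still valid until 2026-02-24, but its rotation deadline already passed in December 2025. client-go's certificate manager schedules rotation at a jittered point inside the certificate's lifetime (roughly the 70-90% region) and recomputes the jitter on each evaluation, which is why a later entry logs a different deadline (2025-12-02) for the same certificate. A sketch of that scheduling rule follows; the one-year lifetime is an assumption purely for illustration, since the log does not show the certificate's issue time.

    import random
    from datetime import datetime, timedelta, timezone

    def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
        # Jittered deadline in roughly the 70-90% region of the lifetime,
        # sketching client-go's nextRotationDeadline behavior.
        return not_before + (not_after - not_before) * random.uniform(0.7, 0.9)

    not_after = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)  # from the log
    not_before = not_after - timedelta(days=365)  # assumed lifetime, not logged
    for _ in range(3):
        print(rotation_deadline(not_before, not_after))  # fresh jitter each call
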
event="NodeHasNoDiskPressure" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.020628 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.020648 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.020661 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:37Z","lastTransitionTime":"2026-01-29T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:37 crc kubenswrapper[4766]: E0129 11:22:37.035299 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:37 crc kubenswrapper[4766]: E0129 11:22:37.035504 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.038297 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
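The "exceeds retry count" line marks the end of one sync iteration: the kubelet attempts the status PATCH a small fixed number of times per sync (nodeStatusUpdateRetry, five in current upstream kubelet sources) and then gives up until the next node-status interval, which is why the webhook errors above arrive in a short burst before this final failure. A sketch of that bounded-retry shape is below; patch_node_status is a hypothetical stand-in for the real API call.

    class UpdateFailed(Exception):
        pass

    NODE_STATUS_UPDATE_RETRY = 5  # mirrors kubelet's nodeStatusUpdateRetry

    def patch_node_status(node: str) -> None:
        # Hypothetical stand-in for PATCH /api/v1/nodes/{node}/status;
        # here it always fails, like the webhook-blocked patches in this log.
        raise UpdateFailed("failed calling webhook: certificate has expired")

    def update_node_status(node: str) -> None:
        for _ in range(NODE_STATUS_UPDATE_RETRY):
            try:
                patch_node_status(node)
                return
            except UpdateFailed as err:
                print(f'"Error updating node status, will retry" err="{err}"')
        raise UpdateFailed("update node status exceeds retry count")

    try:
        update_node_status("crc")
    except UpdateFailed as err:
        print(f'"Unable to update node status" err="{err}"')
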
event="NodeHasSufficientMemory" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.038362 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.038377 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.038398 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.038463 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:37Z","lastTransitionTime":"2026-01-29T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.141840 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.141917 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.141930 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.141953 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.141969 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:37Z","lastTransitionTime":"2026-01-29T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.224125 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:22:37 crc kubenswrapper[4766]: E0129 11:22:37.224313 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.224558 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:22:37 crc kubenswrapper[4766]: E0129 11:22:37.224631 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.244919 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.245254 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.245351 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.245446 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.245561 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:37Z","lastTransitionTime":"2026-01-29T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.349319 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.349372 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.349389 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.349440 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.349459 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:37Z","lastTransitionTime":"2026-01-29T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.452483 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.452533 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.452542 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.452558 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.452570 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:37Z","lastTransitionTime":"2026-01-29T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.556994 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.557038 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.557050 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.557066 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.557079 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:37Z","lastTransitionTime":"2026-01-29T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.660805 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.660923 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.660939 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.660968 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.660995 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:37Z","lastTransitionTime":"2026-01-29T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.764458 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.764509 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.764521 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.764544 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.764557 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:37Z","lastTransitionTime":"2026-01-29T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.869074 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.869151 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.869166 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.869189 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.869205 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:37Z","lastTransitionTime":"2026-01-29T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.972154 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.972202 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.972216 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.972240 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:37 crc kubenswrapper[4766]: I0129 11:22:37.972254 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:37Z","lastTransitionTime":"2026-01-29T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.014144 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 17:14:08.278289866 +0000 UTC Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.075633 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.075721 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.075734 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.075759 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.075775 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:38Z","lastTransitionTime":"2026-01-29T11:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.179501 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.179849 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.179917 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.180001 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.180061 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:38Z","lastTransitionTime":"2026-01-29T11:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.224492 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.224558 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:22:38 crc kubenswrapper[4766]: E0129 11:22:38.224676 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:22:38 crc kubenswrapper[4766]: E0129 11:22:38.224731 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.283903 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.284250 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.284394 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.284574 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.284671 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:38Z","lastTransitionTime":"2026-01-29T11:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.387881 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.387931 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.387945 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.387965 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.387977 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:38Z","lastTransitionTime":"2026-01-29T11:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.490937 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.490976 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.490986 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.491003 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.491015 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:38Z","lastTransitionTime":"2026-01-29T11:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.593791 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.593855 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.593867 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.593888 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.593901 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:38Z","lastTransitionTime":"2026-01-29T11:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.696433 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.696486 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.696498 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.696514 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.696525 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:38Z","lastTransitionTime":"2026-01-29T11:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.799332 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.799488 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.799508 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.799532 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.799552 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:38Z","lastTransitionTime":"2026-01-29T11:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.902322 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.902377 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.902391 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.902439 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:38 crc kubenswrapper[4766]: I0129 11:22:38.902455 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:38Z","lastTransitionTime":"2026-01-29T11:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.005391 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.005480 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.005493 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.005516 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.005541 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:39Z","lastTransitionTime":"2026-01-29T11:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.014342 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 14:13:57.951104383 +0000 UTC Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.108744 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.108796 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.108805 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.108822 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.108835 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:39Z","lastTransitionTime":"2026-01-29T11:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.211734 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.211783 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.211796 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.211818 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.211833 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:39Z","lastTransitionTime":"2026-01-29T11:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.224688 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.224688 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:22:39 crc kubenswrapper[4766]: E0129 11:22:39.224911 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:22:39 crc kubenswrapper[4766]: E0129 11:22:39.224994 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.314775 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.314833 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.314842 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.314861 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.314874 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:39Z","lastTransitionTime":"2026-01-29T11:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.418288 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.418353 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.418363 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.418379 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.418390 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:39Z","lastTransitionTime":"2026-01-29T11:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.521861 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.521905 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.521915 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.521933 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.521945 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:39Z","lastTransitionTime":"2026-01-29T11:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.625259 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.625314 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.625324 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.625345 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.625358 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:39Z","lastTransitionTime":"2026-01-29T11:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.728342 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.728442 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.728454 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.728473 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.728483 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:39Z","lastTransitionTime":"2026-01-29T11:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.831354 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.831396 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.831405 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.831481 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.831493 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:39Z","lastTransitionTime":"2026-01-29T11:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.934813 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.934860 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.934872 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.934890 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:39 crc kubenswrapper[4766]: I0129 11:22:39.934900 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:39Z","lastTransitionTime":"2026-01-29T11:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.014997 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 04:04:23.604613038 +0000 UTC Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.038040 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.038107 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.038121 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.038144 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.038166 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:40Z","lastTransitionTime":"2026-01-29T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.106856 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gnk2d_6986483f-6521-45da-9034-8576037c32ad/kube-multus/0.log" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.106928 4766 generic.go:334] "Generic (PLEG): container finished" podID="6986483f-6521-45da-9034-8576037c32ad" containerID="a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36" exitCode=1 Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.106975 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gnk2d" event={"ID":"6986483f-6521-45da-9034-8576037c32ad","Type":"ContainerDied","Data":"a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36"} Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.107488 4766 scope.go:117] "RemoveContainer" containerID="a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.124572 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a45c5025-5014-4cda-b09c-b8fe58daa0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c3e4b23de55df1e7416d9834c594e6b8baa72850428481ae9589ac2e3a2848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6af6b65be19d42cb0398dd814bea1497dd7a258533b34d84a55aafe3997a422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://368e7d2846989301de5391a33bce19ec278b8a597dad4b565340a9102cb0ca8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:40Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.141247 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.141762 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.141775 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.141797 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.141814 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:40Z","lastTransitionTime":"2026-01-29T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.143099 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:40Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.159620 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:40Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.178435 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:40Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.197168 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:40Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.210781 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:40Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:40 
crc kubenswrapper[4766]: I0129 11:22:40.223741 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.223778 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:22:40 crc kubenswrapper[4766]: E0129 11:22:40.223931 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:22:40 crc kubenswrapper[4766]: E0129 11:22:40.224034 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.229838 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5dc50cb-2d41-45cd-8a3d-615212a20120\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c126f1878b27bb8648cebba2334b545a61682575e486c7752447760c630b71f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a4c1de706188e9d9c986cf611fcfa0afc2fa6d0d9e45908d9864fbd096fb7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc4
78274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a1895436e31a3a277d7ef40231e37f768d143472a5d055ec3fa3908d59eb806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d6b9ab2c5f75cb3a1a6580174135bdbe87b1e341de30ae151d2c7916fb6e85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T11:21:38Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 11:21:38.187211 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 11:21:38.187475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 11:21:38.188924 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-858855041/tls.crt::/tmp/serving-cert-858855041/tls.key\\\\\\\"\\\\nI0129 11:21:38.443648 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 11:21:38.447463 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 11:21:38.447603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 11:21:38.447664 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 11:21:38.447692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 11:21:38.471406 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 11:21:38.471454 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471479 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 11:21:38.471483 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 11:21:38.471487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 11:21:38.471491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 11:21:38.471436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 11:21:38.475175 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964049484efc670285ee54e4f6081c1f719edaa8143966e9762028ad97d2518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:40Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.246247 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.246311 
4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.246332 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.246359 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.246378 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:40Z","lastTransitionTime":"2026-01-29T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.246754 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:40Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.263224 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:40Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.279565 4766 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:40Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.294768 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:40Z is after 2025-08-24T17:21:41Z" Jan 29 
11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.312793 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20dd6698-d285-4d33-b108-af2e963a6230\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627f1cbde0bcbdc735a292c896c151e796db5038d619da66cc9d97c9e94a5721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340091929d2db093c111ffe69890053b76766a605522ff9ce5ee2d307430a47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://340091929d2db093c111ffe69890053b76766a605522ff9ce5ee2d307430a47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:40Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.335491 4766 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:22:39Z\\\",\\\"message\\\":\\\"2026-01-29T11:21:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_914548d4-a627-4d59-bc6c-658d0536ad2d\\\\n2026-01-29T11:21:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_914548d4-a627-4d59-bc6c-658d0536ad2d to /host/opt/cni/bin/\\\\n2026-01-29T11:21:54Z [verbose] multus-daemon started\\\\n2026-01-29T11:21:54Z [verbose] Readiness Indicator file check\\\\n2026-01-29T11:22:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:40Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.349781 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.349851 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.349866 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.349887 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.349899 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:40Z","lastTransitionTime":"2026-01-29T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.355758 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205005e542e6b395fe896960c605a3d4f516929d89a7fee3da8b2e9e1f9e6213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:40Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.377780 4766 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7da52dee9195e28eb49f30ee6a516c5b3f129154f1f1cee810f044f96bb4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7da52dee9195e28eb49f30ee6a516c5b3f129154f1f1cee810f044f96bb4de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:22:21Z\\\",\\\"message\\\":\\\"ctor.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 11:22:20.757122 6589 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 11:22:20.757177 6589 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:20.757739 6589 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:20.757883 6589 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 11:22:20.757904 6589 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 11:22:20.758371 6589 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 11:22:20.758459 6589 factory.go:656] Stopping watch factory\\\\nI0129 11:22:20.758463 6589 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 11:22:20.758481 6589 ovnkube.go:599] Stopped ovnkube\\\\nI0129 11:22:20.758515 6589 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 11:22:20.758594 6589 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:22:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zn4kn_openshift-ovn-kubernetes(98622e63-ce1a-413d-8a0a-32610d52ab94)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:40Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.391835 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:40Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.409699 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:40Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.453636 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.453703 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.453718 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.453738 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.453752 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:40Z","lastTransitionTime":"2026-01-29T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.560252 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.560303 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.560314 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.560332 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.560344 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:40Z","lastTransitionTime":"2026-01-29T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.663390 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.664996 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.665829 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.665856 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.665869 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:40Z","lastTransitionTime":"2026-01-29T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.769877 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.769964 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.769980 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.770006 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.770020 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:40Z","lastTransitionTime":"2026-01-29T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.873552 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.873613 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.873625 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.873647 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.873660 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:40Z","lastTransitionTime":"2026-01-29T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.976819 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.976883 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.976897 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.976920 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:40 crc kubenswrapper[4766]: I0129 11:22:40.976934 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:40Z","lastTransitionTime":"2026-01-29T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.016166 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 01:47:42.154526208 +0000 UTC Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.080161 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.080217 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.080233 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.080253 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.080269 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:41Z","lastTransitionTime":"2026-01-29T11:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.117296 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gnk2d_6986483f-6521-45da-9034-8576037c32ad/kube-multus/0.log" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.117393 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gnk2d" event={"ID":"6986483f-6521-45da-9034-8576037c32ad","Type":"ContainerStarted","Data":"f08a33c85d7bb4c50e3fc2fb60c7b0f91c0bc795639c249410293ab1edd2d684"} Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.136027 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205005e542e6b395fe896960c605a3d4f516929d89a7fee3da8b2e9e1f9e6213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.162204 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7da52dee9195e28eb49f30ee6a516c5b3f129154f1f1cee810f044f96bb4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7da52dee9195e28eb49f30ee6a516c5b3f129154f1f1cee810f044f96bb4de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:22:21Z\\\",\\\"message\\\":\\\"ctor.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 11:22:20.757122 6589 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 11:22:20.757177 6589 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:20.757739 6589 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:20.757883 6589 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 11:22:20.757904 6589 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 11:22:20.758371 6589 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 11:22:20.758459 6589 factory.go:656] Stopping watch factory\\\\nI0129 11:22:20.758463 6589 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 11:22:20.758481 6589 ovnkube.go:599] Stopped ovnkube\\\\nI0129 11:22:20.758515 6589 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 11:22:20.758594 6589 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:22:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zn4kn_openshift-ovn-kubernetes(98622e63-ce1a-413d-8a0a-32610d52ab94)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.178184 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.183602 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.183639 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.183653 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.183674 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.183686 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:41Z","lastTransitionTime":"2026-01-29T11:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.199189 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.214781 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a45c5025-5014-4cda-b09c-b8fe58daa0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c3e4b23de55df1e7416d9834c594e6b8baa72850428481ae9589ac2e3a2848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6af6b65be19d42cb0398dd814bea1497dd7a258533b34d84a55aafe3997a422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://368e7d2846989301de5391a33bce19ec278b8a597dad4b565340a9102cb0ca8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.224022 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.224039 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:22:41 crc kubenswrapper[4766]: E0129 11:22:41.224286 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:22:41 crc kubenswrapper[4766]: E0129 11:22:41.224399 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.233330 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.250725 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.268302 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.287283 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.287352 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.287365 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.287393 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.287456 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:41Z","lastTransitionTime":"2026-01-29T11:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.287672 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.303024 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.319436 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5dc50cb-2d41-45cd-8a3d-615212a20120\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c126f1878b27bb8648cebba2334b545a61682575e486c7752447760c630b71f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a4c1de706188e9d9c986cf611fcfa0afc2fa6d0d9e45908d9864fbd096fb7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a1895436e31a3a277d7ef40231e37f768d143472a5d055ec3fa3908d59eb806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d6b9ab2c5f75cb3a1a6580174135bdbe87b1e341de30ae151d2c7916fb6e85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T11:21:38Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 11:21:38.187211 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 11:21:38.187475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 11:21:38.188924 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-858855041/tls.crt::/tmp/serving-cert-858855041/tls.key\\\\\\\"\\\\nI0129 11:21:38.443648 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 11:21:38.447463 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 11:21:38.447603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 11:21:38.447664 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 11:21:38.447692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 11:21:38.471406 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 11:21:38.471454 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 11:21:38.471483 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 11:21:38.471487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 11:21:38.471491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 11:21:38.471436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 11:21:38.475175 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964049484efc670285ee54e4f6081c1f719edaa8143966e9762028ad97d2518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.332795 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.352313 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.369806 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.386902 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 
11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.390310 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.390384 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.390401 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.390465 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.390486 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:41Z","lastTransitionTime":"2026-01-29T11:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.403214 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20dd6698-d285-4d33-b108-af2e963a6230\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627f1cbde0bcbdc735a292c896c151e796db5038d619da66cc9d97c9e94a5721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340091929d2db093c111ffe69890053b76766a605522ff9ce5ee2d307430a47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://340091929d2db093c111ffe69890053b76766a605522ff9ce5ee2d307430a47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.420715 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08a33c85d7bb4c50e3fc2fb60c7b0f91c0bc795639c249410293ab1edd2d684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:22:39Z\\\",\\\"message\\\":\\\"2026-01-29T11:21:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_914548d4-a627-4d59-bc6c-658d0536ad2d\\\\n2026-01-29T11:21:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_914548d4-a627-4d59-bc6c-658d0536ad2d to /host/opt/cni/bin/\\\\n2026-01-29T11:21:54Z [verbose] multus-daemon started\\\\n2026-01-29T11:21:54Z [verbose] Readiness Indicator file check\\\\n2026-01-29T11:22:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.493949 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.494009 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.494022 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.494046 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.494061 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:41Z","lastTransitionTime":"2026-01-29T11:22:41Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.597333 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.597452 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.597474 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.597498 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.597513 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:41Z","lastTransitionTime":"2026-01-29T11:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.701583 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.701632 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.701645 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.701666 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.701679 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:41Z","lastTransitionTime":"2026-01-29T11:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.805192 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.805236 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.805245 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.805265 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.805276 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:41Z","lastTransitionTime":"2026-01-29T11:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.909839 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.909923 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.909941 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.909962 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:41 crc kubenswrapper[4766]: I0129 11:22:41.909975 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:41Z","lastTransitionTime":"2026-01-29T11:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.013716 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.013778 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.013793 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.013818 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.013831 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:42Z","lastTransitionTime":"2026-01-29T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.016462 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 20:42:35.80926022 +0000 UTC Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.116616 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.116664 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.116704 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.116724 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.116738 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:42Z","lastTransitionTime":"2026-01-29T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.219347 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.219436 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.219449 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.219473 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.219489 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:42Z","lastTransitionTime":"2026-01-29T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.224118 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.224122 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:42 crc kubenswrapper[4766]: E0129 11:22:42.224283 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:22:42 crc kubenswrapper[4766]: E0129 11:22:42.224452 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.323551 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.323615 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.323632 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.323657 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.323671 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:42Z","lastTransitionTime":"2026-01-29T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.427197 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.427274 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.427286 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.427303 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.427313 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:42Z","lastTransitionTime":"2026-01-29T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.531907 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.531981 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.532001 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.532024 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.532037 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:42Z","lastTransitionTime":"2026-01-29T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.641622 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.641751 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.641781 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.641806 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.641824 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:42Z","lastTransitionTime":"2026-01-29T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.744616 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.745099 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.745187 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.745288 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.745581 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:42Z","lastTransitionTime":"2026-01-29T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.848576 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.850046 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.850098 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.850128 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.850144 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:42Z","lastTransitionTime":"2026-01-29T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.952799 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.952853 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.952866 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.952885 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:42 crc kubenswrapper[4766]: I0129 11:22:42.952896 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:42Z","lastTransitionTime":"2026-01-29T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.017342 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 14:48:11.686981668 +0000 UTC Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.056381 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.056476 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.056491 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.056511 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.056526 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:43Z","lastTransitionTime":"2026-01-29T11:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.160448 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.160510 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.160524 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.160553 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.160572 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:43Z","lastTransitionTime":"2026-01-29T11:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.223524 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.223569 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:22:43 crc kubenswrapper[4766]: E0129 11:22:43.223708 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:22:43 crc kubenswrapper[4766]: E0129 11:22:43.223855 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.263265 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.263318 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.263330 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.263347 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.263358 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:43Z","lastTransitionTime":"2026-01-29T11:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.366530 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.366993 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.367107 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.367227 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.367330 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:43Z","lastTransitionTime":"2026-01-29T11:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.471355 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.471400 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.471447 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.471465 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.471479 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:43Z","lastTransitionTime":"2026-01-29T11:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.574524 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.574620 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.574660 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.574687 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.574701 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:43Z","lastTransitionTime":"2026-01-29T11:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.678216 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.678277 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.678290 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.678323 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.678342 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:43Z","lastTransitionTime":"2026-01-29T11:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.781128 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.781193 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.781211 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.781239 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.781256 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:43Z","lastTransitionTime":"2026-01-29T11:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.884544 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.884612 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.884624 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.884646 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.884659 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:43Z","lastTransitionTime":"2026-01-29T11:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.988181 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.988239 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.988261 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.988281 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:43 crc kubenswrapper[4766]: I0129 11:22:43.988381 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:43Z","lastTransitionTime":"2026-01-29T11:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.018191 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 18:51:36.670974017 +0000 UTC Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.091980 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.092027 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.092038 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.092057 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.092075 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:44Z","lastTransitionTime":"2026-01-29T11:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.194782 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.194843 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.194860 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.194882 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.194901 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:44Z","lastTransitionTime":"2026-01-29T11:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.223679 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.223718 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:44 crc kubenswrapper[4766]: E0129 11:22:44.223929 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:22:44 crc kubenswrapper[4766]: E0129 11:22:44.224063 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.298200 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.298265 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.298277 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.298299 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.298315 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:44Z","lastTransitionTime":"2026-01-29T11:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.402295 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.402358 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.402374 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.402400 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.402444 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:44Z","lastTransitionTime":"2026-01-29T11:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.505139 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.505211 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.505222 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.505241 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.505253 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:44Z","lastTransitionTime":"2026-01-29T11:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.608476 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.608517 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.608528 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.608547 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.608559 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:44Z","lastTransitionTime":"2026-01-29T11:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.711907 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.711969 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.711979 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.711998 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.712010 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:44Z","lastTransitionTime":"2026-01-29T11:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.815183 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.815234 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.815246 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.815265 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.815281 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:44Z","lastTransitionTime":"2026-01-29T11:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.921857 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.921954 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.921971 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.922014 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:44 crc kubenswrapper[4766]: I0129 11:22:44.922028 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:44Z","lastTransitionTime":"2026-01-29T11:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.019054 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 12:41:10.117690356 +0000 UTC Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.025733 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.025787 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.025803 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.025822 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.025835 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:45Z","lastTransitionTime":"2026-01-29T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.129459 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.129500 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.129514 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.129532 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.129545 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:45Z","lastTransitionTime":"2026-01-29T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.224506 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.224670 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:22:45 crc kubenswrapper[4766]: E0129 11:22:45.224759 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:22:45 crc kubenswrapper[4766]: E0129 11:22:45.224911 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.230997 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.231046 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.231056 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.231071 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.231082 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:45Z","lastTransitionTime":"2026-01-29T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.239936 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08a33c85d7bb4c50e3fc2fb60c7b0f91c0bc795639c249410293ab1edd2d684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:22:39Z\\\",\\\"message\\\":\\\"2026-01-29T11:21:52+00:00 
[cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_914548d4-a627-4d59-bc6c-658d0536ad2d\\\\n2026-01-29T11:21:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_914548d4-a627-4d59-bc6c-658d0536ad2d to /host/opt/cni/bin/\\\\n2026-01-29T11:21:54Z [verbose] multus-daemon started\\\\n2026-01-29T11:21:54Z [verbose] Readiness Indicator file check\\\\n2026-01-29T11:22:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.252714 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.271558 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205005e542e6b395fe896960c605a3d4f516929d89a7fee3da8b2e9e1f9e6213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.296158 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7da52dee9195e28eb49f30ee6a516c5b3f129154f1f1cee810f044f96bb4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7da52dee9195e28eb49f30ee6a516c5b3f129154f1f1cee810f044f96bb4de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:22:21Z\\\",\\\"message\\\":\\\"ctor.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 11:22:20.757122 6589 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 11:22:20.757177 6589 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:20.757739 6589 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:20.757883 6589 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 11:22:20.757904 6589 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 11:22:20.758371 6589 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 11:22:20.758459 6589 factory.go:656] Stopping watch factory\\\\nI0129 11:22:20.758463 6589 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 11:22:20.758481 6589 ovnkube.go:599] Stopped ovnkube\\\\nI0129 11:22:20.758515 6589 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 11:22:20.758594 6589 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:22:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zn4kn_openshift-ovn-kubernetes(98622e63-ce1a-413d-8a0a-32610d52ab94)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.310733 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.324109 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.333342 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.333394 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.333429 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.333460 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.333478 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:45Z","lastTransitionTime":"2026-01-29T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.339895 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5dc50cb-2d41-45cd-8a3d-615212a20120\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c126f1878b27bb8648cebba2334b545a61682575e486c7752447760c630b71f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a4c1de706188e9d9c986cf611fcfa0afc2fa6d0d9e45908d9864fbd096fb7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a1895436e31a3a277d7ef40231e37f768d143472a5d055ec3fa3908d59eb806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d6b9ab2c5f75cb3a1a6580174135bdbe87b1e341de30ae151d2c7916fb6e85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T11:21:38Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 11:21:38.187211 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 11:21:38.187475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 11:21:38.188924 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-858855041/tls.crt::/tmp/serving-cert-858855041/tls.key\\\\\\\"\\\\nI0129 11:21:38.443648 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 11:21:38.447463 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 11:21:38.447603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 11:21:38.447664 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 11:21:38.447692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 11:21:38.471406 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 11:21:38.471454 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 11:21:38.471483 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 11:21:38.471487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 11:21:38.471491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 11:21:38.471436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 11:21:38.475175 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964049484efc670285ee54e4f6081c1f719edaa8143966e9762028ad97d2518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.351256 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a45c5025-5014-4cda-b09c-b8fe58daa0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c3e4b23de55df1e7416d9834c594e6b8baa72850428481ae9589ac2e3a2848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6af6b65be19d42cb0398dd814bea1497dd7a258533b34d84a55aafe3997a422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://368e7d2846989301de5391a33bce19ec278b8a597dad4b565340a9102cb0ca8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.365252 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.379013 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.393373 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.409240 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.422984 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.435777 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.435819 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.435828 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.435844 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.435857 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:45Z","lastTransitionTime":"2026-01-29T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.436819 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20dd6698-d285-4d33-b108-af2e963a6230\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627f1cbde0bcbdc735a292c896c151e796db5038d619da66cc9d97c9e94a5721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340091929d2db093c111ffe69890053b76766a605522ff9ce5ee2d307430a47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://340091929d2db093c111ffe69890053b76766a605522ff9ce5ee2d307430a47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.451789 4766 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.468007 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.490303 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 29 
11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.540073 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.540152 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.540174 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.540205 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.540229 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:45Z","lastTransitionTime":"2026-01-29T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.642986 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.643033 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.643067 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.643085 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.643095 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:45Z","lastTransitionTime":"2026-01-29T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.746539 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.746608 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.746626 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.746651 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.746671 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:45Z","lastTransitionTime":"2026-01-29T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.850290 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.850346 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.850369 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.850396 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.850442 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:45Z","lastTransitionTime":"2026-01-29T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.954027 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.954091 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.954107 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.954131 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:45 crc kubenswrapper[4766]: I0129 11:22:45.954145 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:45Z","lastTransitionTime":"2026-01-29T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.019624 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 07:14:19.307622181 +0000 UTC Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.057374 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.057477 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.057490 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.057508 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.057521 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:46Z","lastTransitionTime":"2026-01-29T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.161094 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.161157 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.161171 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.161191 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.161204 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:46Z","lastTransitionTime":"2026-01-29T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.224313 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.224434 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:22:46 crc kubenswrapper[4766]: E0129 11:22:46.224528 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:22:46 crc kubenswrapper[4766]: E0129 11:22:46.224654 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.264234 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.264286 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.264296 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.264315 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.264325 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:46Z","lastTransitionTime":"2026-01-29T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.367304 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.367378 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.367393 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.367445 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.367466 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:46Z","lastTransitionTime":"2026-01-29T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.469887 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.469928 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.469940 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.469958 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.469973 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:46Z","lastTransitionTime":"2026-01-29T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.572174 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.572224 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.572237 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.572255 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.572267 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:46Z","lastTransitionTime":"2026-01-29T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.675382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.675492 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.675508 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.675531 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.675545 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:46Z","lastTransitionTime":"2026-01-29T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.778819 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.778886 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.778907 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.778928 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.778947 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:46Z","lastTransitionTime":"2026-01-29T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.882024 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.882075 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.882112 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.882133 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.882147 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:46Z","lastTransitionTime":"2026-01-29T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.985956 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.986044 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.986054 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.986073 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:46 crc kubenswrapper[4766]: I0129 11:22:46.986102 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:46Z","lastTransitionTime":"2026-01-29T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.020224 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 04:44:17.110296537 +0000 UTC Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.089257 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.089344 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.089360 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.089404 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.089439 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:47Z","lastTransitionTime":"2026-01-29T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.131036 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.131193 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.131554 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.131589 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.131606 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:47Z","lastTransitionTime":"2026-01-29T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:47 crc kubenswrapper[4766]: E0129 11:22:47.150870 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:47Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.156776 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.156838 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.156854 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.156873 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.156886 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:47Z","lastTransitionTime":"2026-01-29T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:47 crc kubenswrapper[4766]: E0129 11:22:47.173309 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:47Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.178635 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.178698 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.178710 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.178733 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.178745 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:47Z","lastTransitionTime":"2026-01-29T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:47 crc kubenswrapper[4766]: E0129 11:22:47.196796 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:47Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.201860 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.201921 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.201932 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.201954 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.201967 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:47Z","lastTransitionTime":"2026-01-29T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:47 crc kubenswrapper[4766]: E0129 11:22:47.217579 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:47Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.223778 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.223895 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.223911 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:22:47 crc kubenswrapper[4766]: E0129 11:22:47.224917 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:22:47 crc kubenswrapper[4766]: E0129 11:22:47.224979 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.224722 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.225051 4766 scope.go:117] "RemoveContainer" containerID="6e7da52dee9195e28eb49f30ee6a516c5b3f129154f1f1cee810f044f96bb4de" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.225113 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.225156 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.225173 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:47Z","lastTransitionTime":"2026-01-29T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:47 crc kubenswrapper[4766]: E0129 11:22:47.245468 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:47Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:47 crc kubenswrapper[4766]: E0129 11:22:47.245671 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.248128 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
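
The two enormous failed node-status patches above, and the "update node status exceeds retry count" entry that closes them out, all trace back to a single root cause visible in the error tail: the API server cannot deliver the patch because its call to the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 fails TLS verification — the webhook's serving certificate expired 2025-08-24T17:21:41Z while the node clock now reads 2026-01-29. A minimal Go sketch to confirm the certificate window from the node itself (this assumes local access to the endpoint and is a diagnostic illustration, not part of any tooling referenced in the log):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Dial the node-identity webhook endpoint seen in the log. The chain is
	// already known to be expired, so skip verification purely to read the
	// certificate's validity window.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore.Format(time.RFC3339))
	fmt.Println("notAfter: ", cert.NotAfter.Format(time.RFC3339))
	fmt.Println("expired:  ", time.Now().After(cert.NotAfter))
}
```

If the webhook certificate has indeed lapsed, every kubelet status patch gated by that webhook will keep failing exactly as logged below, regardless of how many times the kubelet retries.
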
event="NodeHasSufficientMemory" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.248214 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.248231 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.248255 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.248269 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:47Z","lastTransitionTime":"2026-01-29T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.351489 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.351542 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.351553 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.351576 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.351588 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:47Z","lastTransitionTime":"2026-01-29T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.453744 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.453787 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.453797 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.453815 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.453826 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:47Z","lastTransitionTime":"2026-01-29T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.556745 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.556803 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.556816 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.556838 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.556854 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:47Z","lastTransitionTime":"2026-01-29T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.659612 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.659671 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.659681 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.659697 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.659707 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:47Z","lastTransitionTime":"2026-01-29T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.768530 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.768573 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.768585 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.768603 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.768618 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:47Z","lastTransitionTime":"2026-01-29T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.871331 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.871457 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.871478 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.871496 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.871509 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:47Z","lastTransitionTime":"2026-01-29T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.974389 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.974461 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.974477 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.974493 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:47 crc kubenswrapper[4766]: I0129 11:22:47.974504 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:47Z","lastTransitionTime":"2026-01-29T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.021065 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 02:06:05.382188369 +0000 UTC Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.078603 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.078661 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.078687 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.078705 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.078719 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:48Z","lastTransitionTime":"2026-01-29T11:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.152644 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zn4kn_98622e63-ce1a-413d-8a0a-32610d52ab94/ovnkube-controller/2.log" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.159390 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" event={"ID":"98622e63-ce1a-413d-8a0a-32610d52ab94","Type":"ContainerStarted","Data":"4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b"} Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.160830 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.181811 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.182096 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.182117 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.182127 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.182144 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.182157 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:48Z","lastTransitionTime":"2026-01-29T11:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.201708 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.222592 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.223672 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.223814 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:48 crc kubenswrapper[4766]: E0129 11:22:48.223858 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:22:48 crc kubenswrapper[4766]: E0129 11:22:48.223906 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
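
The "Error syncing pod, skipping" entries here (network-metrics-daemon-xrjg5, network-check-target-xd92c, network-check-source-55646444c4-trplf, networking-console-plugin-85b44fc459-gdk6g) all repeat one condition: NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/, so the kubelet refuses to create new pod sandboxes until the network plugin (here, ovn-kubernetes) writes its config. A quick check of that directory — a hypothetical diagnostic sketch, using the conventional CNI config extensions — looks like this:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// The kubelet reports NetworkReady=false until a CNI config shows up here.
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read", dir, "-", err)
		return
	}
	confs := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions the CNI loader accepts
			confs++
			fmt.Println("found CNI config:", e.Name())
		}
	}
	if confs == 0 {
		fmt.Println("no CNI configuration files: the network plugin has not written its config yet")
	}
}
```

Once ovnkube-node finishes starting (its ContainerStarted event appears earlier in this same second), the config file should land in that directory and these sync errors stop.
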
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.241855 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.257142 4766 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.276809 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5dc50cb-2d41-45cd-8a3d-615212a20120\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c126f1878b27bb8648cebba2334b545a61682575e486c7752447760c630b71f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a4c1de706188e9d9c986cf611fcfa0afc2fa6d0d9e45908d9864fbd096fb7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a1895436e31a3a277d7ef40231e37f768d143472a5d055ec3fa3908d59eb806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d6b9ab2c5f75cb3a1a6580174135bdbe87b1e341de30ae151d2c7916fb6e85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T11:21:38Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 11:21:38.187211 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 11:21:38.187475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 11:21:38.188924 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-858855041/tls.crt::/tmp/serving-cert-858855041/tls.key\\\\\\\"\\\\nI0129 11:21:38.443648 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 11:21:38.447463 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 11:21:38.447603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 11:21:38.447664 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 11:21:38.447692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 11:21:38.471406 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 11:21:38.471454 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 11:21:38.471483 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 11:21:38.471487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 11:21:38.471491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 11:21:38.471436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 11:21:38.475175 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964049484efc670285ee54e4f6081c1f719edaa8143966e9762028ad97d2518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.285648 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.285706 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.285718 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.285751 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.285764 4766 setters.go:603] "Node became not ready" node="crc" 
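
Note the pattern: the kubelet cycles through NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, and NodeNotReady roughly every 100ms, because each status patch is rejected by the expired webhook and the conditions never persist server-side. When triaging a capture like this, tallying the event="..." fields makes the loop obvious at a glance; a small sketch, assuming the journal text is piped on stdin (the program name below is illustrative):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Tally kubelet node events from a saved journal capture fed on stdin,
	// e.g. the repeated "Recording event message for node" lines above.
	re := regexp.MustCompile(`event="([A-Za-z]+)"`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // entries here far exceed the 64KiB default
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for ev, n := range counts {
		fmt.Printf("%-28s %d\n", ev, n)
	}
}
```

For example: journalctl -u kubelet --no-pager | go run tallyevents.go. A count dominated by NodeNotReady alongside the pressure events, as in this capture, points at the status-update path rather than actual resource pressure.
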
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:48Z","lastTransitionTime":"2026-01-29T11:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.293694 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a45c5025-5014-4cda-b09c-b8fe58daa0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c3e4b23de55df1e7416d9834c594e6b8baa72850428481ae9589ac2e3a2848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6af6b65be19d42cb0398dd814bea1497dd7a258533b34d84a55aafe3997a422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://368e7d2846989301de5391a33bce19ec278b8a597dad4b565340a9102cb0ca8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.315938 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.333550 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.355928 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 
11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.374124 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20dd6698-d285-4d33-b108-af2e963a6230\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627f1cbde0bcbdc735a292c896c151e796db5038d619da66cc9d97c9e94a5721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340091929d2db093c111ffe69890053b76766a605522ff9ce5ee2d307430a47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://340091929d2db093c111ffe69890053b76766a605522ff9ce5ee2d307430a47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.389319 4766 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.389367 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.389382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.389404 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.389446 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:48Z","lastTransitionTime":"2026-01-29T11:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.394770 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.418914 4766 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-gnk2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08a33c85d7bb4c50e3fc2fb60c7b0f91c0bc795639c249410293ab1edd2d684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:22:39Z\\\",\\\"message\\\":\\\"2026-01-29T11:21:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_914548d4-a627-4d59-bc6c-658d0536ad2d\\\\n2026-01-29T11:21:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_914548d4-a627-4d59-bc6c-658d0536ad2d to /host/opt/cni/bin/\\\\n2026-01-29T11:21:54Z [verbose] multus-daemon started\\\\n2026-01-29T11:21:54Z [verbose] Readiness Indicator file check\\\\n2026-01-29T11:22:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.455479 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7da52dee9195e28eb49f30ee6a516c5b3f129154f1f1cee810f044f96bb4de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:22:21Z\\\",\\\"message\\\":\\\"ctor.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 11:22:20.757122 6589 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 11:22:20.757177 6589 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:20.757739 6589 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:20.757883 6589 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 11:22:20.757904 6589 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 11:22:20.758371 6589 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 11:22:20.758459 6589 factory.go:656] Stopping watch factory\\\\nI0129 11:22:20.758463 6589 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 11:22:20.758481 6589 ovnkube.go:599] Stopped ovnkube\\\\nI0129 11:22:20.758515 6589 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 11:22:20.758594 6589 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:22:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.483056 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.493091 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.493172 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.493192 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.493213 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.493228 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:48Z","lastTransitionTime":"2026-01-29T11:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.504398 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.525352 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205005e542e6b395fe896960c605a3d4f516929d89a7fee3da8b2e9e1f9e6213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.596972 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.597032 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:48 crc 
kubenswrapper[4766]: I0129 11:22:48.597049 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.597072 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.597087 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:48Z","lastTransitionTime":"2026-01-29T11:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.699705 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.700517 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.700549 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.700583 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.700601 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:48Z","lastTransitionTime":"2026-01-29T11:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.803767 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.803827 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.803840 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.803862 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.803876 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:48Z","lastTransitionTime":"2026-01-29T11:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.907352 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.907460 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.907473 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.907496 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:48 crc kubenswrapper[4766]: I0129 11:22:48.907509 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:48Z","lastTransitionTime":"2026-01-29T11:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.009846 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.009881 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.009890 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.009908 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.009920 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:49Z","lastTransitionTime":"2026-01-29T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.021719 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 19:45:01.52635648 +0000 UTC Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.113874 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.113913 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.113922 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.113942 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.113952 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:49Z","lastTransitionTime":"2026-01-29T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.165201 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zn4kn_98622e63-ce1a-413d-8a0a-32610d52ab94/ovnkube-controller/3.log" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.165938 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zn4kn_98622e63-ce1a-413d-8a0a-32610d52ab94/ovnkube-controller/2.log" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.169794 4766 generic.go:334] "Generic (PLEG): container finished" podID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerID="4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b" exitCode=1 Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.169849 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" event={"ID":"98622e63-ce1a-413d-8a0a-32610d52ab94","Type":"ContainerDied","Data":"4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b"} Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.169898 4766 scope.go:117] "RemoveContainer" containerID="6e7da52dee9195e28eb49f30ee6a516c5b3f129154f1f1cee810f044f96bb4de" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.170801 4766 scope.go:117] "RemoveContainer" containerID="4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b" Jan 29 11:22:49 crc kubenswrapper[4766]: E0129 11:22:49.171016 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zn4kn_openshift-ovn-kubernetes(98622e63-ce1a-413d-8a0a-32610d52ab94)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.190586 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20dd6698-d285-4d33-b108-af2e963a6230\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627f1cbde0bcbdc735a292c896c151e796db5038d619da66cc9d97c9e94a5721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340091929d2db093c111ffe69890053b76766a605522ff9ce5ee2d307430a47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://340091929d2db093c111ffe69890053b76766a605522ff9ce5ee2d307430a47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.210108 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.217469 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.217515 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.217525 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.217545 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.217556 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:49Z","lastTransitionTime":"2026-01-29T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.223951 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.224194 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:22:49 crc kubenswrapper[4766]: E0129 11:22:49.224460 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:22:49 crc kubenswrapper[4766]: E0129 11:22:49.224564 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.229356 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 
11:22:49.238856 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.241699 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.247384 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"
podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.265780 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08a33c85d7bb4c50e3fc2fb60c7b0f91c0bc795639c249410293ab1edd2d684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:22:39Z\\\",\\\"message\\\":\\\"2026-01-29T11:21:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_914548d4-a627-4d59-bc6c-658d0536ad2d\\\\n2026-01-29T11:21:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_914548d4-a627-4d59-bc6c-658d0536ad2d to /host/opt/cni/bin/\\\\n2026-01-29T11:21:54Z [verbose] multus-daemon started\\\\n2026-01-29T11:21:54Z [verbose] Readiness Indicator file check\\\\n2026-01-29T11:22:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.283206 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.304464 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205005e542e6b395fe896960c605a3d4f516929d89a7fee3da8b2e9e1f9e6213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.320487 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.320546 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:49 crc 
kubenswrapper[4766]: I0129 11:22:49.320559 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.320579 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.320591 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:49Z","lastTransitionTime":"2026-01-29T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.332194 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fece212a715318eca7821c40626aa12b00bce17
4a544f754be33dcd01d0327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7da52dee9195e28eb49f30ee6a516c5b3f129154f1f1cee810f044f96bb4de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:22:21Z\\\",\\\"message\\\":\\\"ctor.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 11:22:20.757122 6589 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 11:22:20.757177 6589 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:20.757739 6589 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:20.757883 6589 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 11:22:20.757904 6589 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 11:22:20.758371 6589 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 11:22:20.758459 6589 factory.go:656] Stopping watch factory\\\\nI0129 11:22:20.758463 6589 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 11:22:20.758481 6589 ovnkube.go:599] Stopped ovnkube\\\\nI0129 11:22:20.758515 6589 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 11:22:20.758594 6589 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:22:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:22:48Z\\\",\\\"message\\\":\\\" 6910 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:48.418328 6910 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:48.418506 6910 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:48.419402 6910 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:48.435063 6910 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 11:22:48.435160 6910 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 11:22:48.435242 6910 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 11:22:48.435286 6910 factory.go:656] Stopping watch factory\\\\nI0129 11:22:48.435307 6910 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 11:22:48.473540 6910 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 11:22:48.473593 6910 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 11:22:48.473691 6910 ovnkube.go:599] Stopped 
ovnkube\\\\nI0129 11:22:48.473730 6910 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 11:22:48.473823 6910 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.350171 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.370631 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5dc50cb-2d41-45cd-8a3d-615212a20120\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c126f1878b27bb8648cebba2334b545a61682575e486c7752447760c630b71f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a4c1de706188e9d9c986cf611fcfa0afc2fa6d0d9e45908d9864fbd096fb7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a1895436e31a3a277d7ef40231e37f768d143472a5d055ec3fa3908d59eb806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d6b9ab2c5f75cb3a1a6580174135bdbe87b1e341de30ae151d2c7916fb6e85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T11:21:38Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 11:21:38.187211 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 11:21:38.187475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 11:21:38.188924 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-858855041/tls.crt::/tmp/serving-cert-858855041/tls.key\\\\\\\"\\\\nI0129 11:21:38.443648 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 11:21:38.447463 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 11:21:38.447603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 11:21:38.447664 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 11:21:38.447692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 11:21:38.471406 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 11:21:38.471454 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 11:21:38.471483 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 11:21:38.471487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 11:21:38.471491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 11:21:38.471436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 11:21:38.475175 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964049484efc670285ee54e4f6081c1f719edaa8143966e9762028ad97d2518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.390868 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a45c5025-5014-4cda-b09c-b8fe58daa0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c3e4b23de55df1e7416d9834c594e6b8baa72850428481ae9589ac2e3a2848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6af6b65be19d42cb0398dd814bea1497dd7a258533b34d84a55aafe3997a422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://368e7d2846989301de5391a33bce19ec278b8a597dad4b565340a9102cb0ca8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.410375 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.425010 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.425108 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.425126 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.425153 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.425171 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:49Z","lastTransitionTime":"2026-01-29T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.430323 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.448379 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.467598 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.483309 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.502747 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.529486 4766 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.529549 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.529566 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.529586 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.529596 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:49Z","lastTransitionTime":"2026-01-29T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.633537 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.633596 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.633608 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.633628 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.633642 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:49Z","lastTransitionTime":"2026-01-29T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.736962 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.736998 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.737010 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.737031 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.737044 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:49Z","lastTransitionTime":"2026-01-29T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.839308 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.839359 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.839372 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.839391 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.839404 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:49Z","lastTransitionTime":"2026-01-29T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.845284 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:22:49 crc kubenswrapper[4766]: E0129 11:22:49.845502 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:53.845398428 +0000 UTC m=+170.957791439 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.845548 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.845597 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.845631 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.845663 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:22:49 crc kubenswrapper[4766]: E0129 11:22:49.845763 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 11:22:49 crc kubenswrapper[4766]: E0129 11:22:49.845786 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 11:22:49 crc kubenswrapper[4766]: E0129 11:22:49.845805 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 11:22:49 crc kubenswrapper[4766]: E0129 11:22:49.845818 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:22:49 crc kubenswrapper[4766]: E0129 11:22:49.845824 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 11:23:53.84581104 +0000 UTC m=+170.958204051 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 11:22:49 crc kubenswrapper[4766]: E0129 11:22:49.845857 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 11:23:53.845847021 +0000 UTC m=+170.958240032 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:22:49 crc kubenswrapper[4766]: E0129 11:22:49.845899 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 11:22:49 crc kubenswrapper[4766]: E0129 11:22:49.846003 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 11:22:49 crc kubenswrapper[4766]: E0129 11:22:49.846076 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 11:22:49 crc kubenswrapper[4766]: E0129 11:22:49.846100 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:22:49 crc kubenswrapper[4766]: E0129 11:22:49.846275 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 11:23:53.846013036 +0000 UTC m=+170.958406227 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 11:22:49 crc kubenswrapper[4766]: E0129 11:22:49.846705 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 11:23:53.846675435 +0000 UTC m=+170.959068466 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.942965 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.943028 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.943048 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.943072 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:49 crc kubenswrapper[4766]: I0129 11:22:49.943087 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:49Z","lastTransitionTime":"2026-01-29T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.022120 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 00:07:44.701722214 +0000 UTC Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.046487 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.046585 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.046599 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.046624 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.046639 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:50Z","lastTransitionTime":"2026-01-29T11:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.149917 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.150003 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.150117 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.150144 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.150159 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:50Z","lastTransitionTime":"2026-01-29T11:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.176954 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zn4kn_98622e63-ce1a-413d-8a0a-32610d52ab94/ovnkube-controller/3.log" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.181295 4766 scope.go:117] "RemoveContainer" containerID="4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b" Jan 29 11:22:50 crc kubenswrapper[4766]: E0129 11:22:50.181576 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zn4kn_openshift-ovn-kubernetes(98622e63-ce1a-413d-8a0a-32610d52ab94)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.199537 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c78d80-661d-4839-a90d-3e9a137c590b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://509f5e01bea7149b8c69f416c9d88c388d3db3e6300254e1d58b167629183dfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6eeec32db3cd97e718206000b41183351e1698186a661547746982cef1518a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d545b1c015854aae81ddf385c118593789397a7f62077baaf1261ddda6b81fad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d4b9058cea53335860f66fdf06820202660275143325c3dc5b813df1d60818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.218990 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08a33c85d7bb4c50e3fc2fb60c7b0f91c0bc795639c249410293ab1edd2d684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:22:39Z\\\",\\\"message\\\":\\\"2026-01-29T11:21:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_914548d4-a627-4d59-bc6c-658d0536ad2d\\\\n2026-01-29T11:21:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_914548d4-a627-4d59-bc6c-658d0536ad2d to /host/opt/cni/bin/\\\\n2026-01-29T11:21:54Z [verbose] multus-daemon started\\\\n2026-01-29T11:21:54Z [verbose] Readiness 
Indicator file check\\\\n2026-01-29T11:22:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.223536 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.223706 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:50 crc kubenswrapper[4766]: E0129 11:22:50.223790 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:22:50 crc kubenswrapper[4766]: E0129 11:22:50.223870 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.237770 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.253185 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.253249 4766 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.253264 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.253283 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.253295 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:50Z","lastTransitionTime":"2026-01-29T11:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.258433 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205005e542e6b395fe896960c605a3d4f516929d89a7fee3da8b2e9e1f9e6213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\
":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.284666 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fece212a715318eca7821c40626aa12b00bce17
4a544f754be33dcd01d0327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:22:48Z\\\",\\\"message\\\":\\\" 6910 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:48.418328 6910 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:48.418506 6910 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:48.419402 6910 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:48.435063 6910 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 11:22:48.435160 6910 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 11:22:48.435242 6910 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 11:22:48.435286 6910 factory.go:656] Stopping watch factory\\\\nI0129 11:22:48.435307 6910 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 11:22:48.473540 6910 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 11:22:48.473593 6910 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 11:22:48.473691 6910 ovnkube.go:599] Stopped ovnkube\\\\nI0129 11:22:48.473730 6910 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 11:22:48.473823 6910 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:22:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zn4kn_openshift-ovn-kubernetes(98622e63-ce1a-413d-8a0a-32610d52ab94)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.302337 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.322035 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5dc50cb-2d41-45cd-8a3d-615212a20120\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c126f1878b27bb8648cebba2334b545a61682575e486c7752447760c630b71f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a4c1de706188e9d9c986cf611fcfa0afc2fa6d0d9e45908d9864fbd096fb7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a1895436e31a3a277d7ef40231e37f768d143472a5d055ec3fa3908d59eb806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d6b9ab2c5f75cb3a1a6580174135bdbe87b1e341de30ae151d2c7916fb6e85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T11:21:38Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 11:21:38.187211 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 11:21:38.187475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 11:21:38.188924 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-858855041/tls.crt::/tmp/serving-cert-858855041/tls.key\\\\\\\"\\\\nI0129 11:21:38.443648 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 11:21:38.447463 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 11:21:38.447603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 11:21:38.447664 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 11:21:38.447692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 11:21:38.471406 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 11:21:38.471454 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 11:21:38.471483 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 11:21:38.471487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 11:21:38.471491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 11:21:38.471436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 11:21:38.475175 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964049484efc670285ee54e4f6081c1f719edaa8143966e9762028ad97d2518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.341481 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a45c5025-5014-4cda-b09c-b8fe58daa0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c3e4b23de55df1e7416d9834c594e6b8baa72850428481ae9589ac2e3a2848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6af6b65be19d42cb0398dd814bea1497dd7a258533b34d84a55aafe3997a422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://368e7d2846989301de5391a33bce19ec278b8a597dad4b565340a9102cb0ca8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.356987 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.357036 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.357048 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.357065 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.357078 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:50Z","lastTransitionTime":"2026-01-29T11:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.368328 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ad5e6aa-608c-4f11-be50-ab47da7c3d32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9057e7dacac5ef2dd405ea124359e5bc143025ab45ad29f20d5f6c16da236b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d02adb96cd77bb10d186e4a9d47ea85ec282480dd0cfd5ef108274fc6b74d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab2524a59d6f3504907bae7dae0f390e8326b9490441dbee277bc0a44d8c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63bd3ed7fe3334bb28ec0880e5a9afc307d112e4a801744891faf2c28710a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be35ac9ff26d4e33294cd586455634fa2e2f070b3b9c39f1b02cc683e2fdc7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89136d4c4f8fb5bba2c61dbdeeb8d207b694025da3d0b305163ca6d237a5c749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89136d4c4f8fb5bba2c61dbdeeb8d207b694025da3d0b305163ca6d237a5c749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c57a93549ba2188c3e3b8944e05cbc29caeddc0eb3f54f8bd4f019224a9bb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c57a93549ba2188c3e3b8944e05cbc29caeddc0eb3f54f8bd4f019224a9bb82\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-29T11:21:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9c79346d5e42839cf96932f256383d9d926ddb9eb74b6959195bdc3502f6224b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c79346d5e42839cf96932f256383d9d926ddb9eb74b6959195bdc3502f6224b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.389296 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.404778 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.422749 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.441861 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.459218 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:50 
crc kubenswrapper[4766]: I0129 11:22:50.459985 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.460027 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.460038 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.460056 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.460111 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:50Z","lastTransitionTime":"2026-01-29T11:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.478265 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.499684 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20dd6698-d285-4d33-b108-af2e963a6230\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627f1cbde0bcbdc735a292c896c151e796db5038d619da66cc9d97c9e94a5721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340091929d2db093c111ffe69890053b76766a605522ff9ce5ee2d307430a47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://340091929d2db093c111ffe69890053b76766a605522ff9ce5ee2d307430a47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.521155 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 
11:22:50.536904 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.556102 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 29 
11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.563111 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.563140 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.563148 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.563163 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.563172 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:50Z","lastTransitionTime":"2026-01-29T11:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.666299 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.666354 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.666370 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.666389 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.666403 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:50Z","lastTransitionTime":"2026-01-29T11:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.769510 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.769575 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.769590 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.769610 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.769624 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:50Z","lastTransitionTime":"2026-01-29T11:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.873386 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.873504 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.873520 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.873541 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.873557 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:50Z","lastTransitionTime":"2026-01-29T11:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.976910 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.976974 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.976988 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.977011 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:50 crc kubenswrapper[4766]: I0129 11:22:50.977026 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:50Z","lastTransitionTime":"2026-01-29T11:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.022671 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 04:55:52.614119375 +0000 UTC Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.080059 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.080104 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.080118 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.080140 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.080163 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:51Z","lastTransitionTime":"2026-01-29T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.183151 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.183206 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.183218 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.183237 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.183251 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:51Z","lastTransitionTime":"2026-01-29T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.223629 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.223685 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:22:51 crc kubenswrapper[4766]: E0129 11:22:51.223880 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:22:51 crc kubenswrapper[4766]: E0129 11:22:51.223997 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.287057 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.287553 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.287698 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.287834 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.287933 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:51Z","lastTransitionTime":"2026-01-29T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.391188 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.391244 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.391261 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.391286 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.391300 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:51Z","lastTransitionTime":"2026-01-29T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.494464 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.494541 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.494556 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.494578 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.494594 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:51Z","lastTransitionTime":"2026-01-29T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.596993 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.597045 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.597054 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.597072 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.597083 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:51Z","lastTransitionTime":"2026-01-29T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.699400 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.699549 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.699576 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.699613 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.699634 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:51Z","lastTransitionTime":"2026-01-29T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.802250 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.802308 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.802321 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.802342 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:22:51 crc kubenswrapper[4766]: I0129 11:22:51.802353 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:51Z","lastTransitionTime":"2026-01-29T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the same five-entry node-status cycle repeats at 11:22:51.905 and 11:22:52.009, identical apart from timestamps ...]
Jan 29 11:22:52 crc kubenswrapper[4766]: I0129 11:22:52.022861 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 11:22:17.706653692 +0000 UTC
[... node-status cycle repeats at 11:22:52.112 and 11:22:52.216 ...]
Jan 29 11:22:52 crc kubenswrapper[4766]: I0129 11:22:52.224427 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 11:22:52 crc kubenswrapper[4766]: E0129 11:22:52.224599 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 11:22:52 crc kubenswrapper[4766]: I0129 11:22:52.224446 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 11:22:52 crc kubenswrapper[4766]: E0129 11:22:52.224856 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[... node-status cycle repeats at 11:22:52.318 and 11:22:52.422 ...]
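Every entry above traces back to a single condition: the container runtime reports NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/ yet. A minimal stdlib-only Go sketch of that directory check follows; the path is taken from the log line, while the extension list (.conf, .conflist, .json) mirrors common CNI loader conventions and is an assumption, not CRI-O's exact implementation.

// cnicheck.go - a sketch of the check implied by the log message
// "no CNI configuration file in /etc/kubernetes/cni/net.d/".
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // path taken from the log
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", confDir, err)
		os.Exit(1)
	}
	var found []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // assumed loader conventions
			found = append(found, e.Name())
		}
	}
	if len(found) == 0 {
		// The state this node is stuck in: the network plugin has not
		// written its config, so the runtime keeps NetworkReady=false.
		fmt.Println("no CNI configuration files found - network plugin not started?")
		return
	}
	fmt.Println("CNI configs:", found)
}

An empty result here would mean the network operator (on this node, presumably ovn-kubernetes) has not come up far enough to drop its config, which is exactly the state the kubelet keeps re-reporting below.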
[... node-status cycle repeats at 11:22:52.525, 11:22:52.629, 11:22:52.734, 11:22:52.837, and 11:22:52.940 ...]
Jan 29 11:22:53 crc kubenswrapper[4766]: I0129 11:22:53.023790 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 07:14:52.576629119 +0000 UTC
[... node-status cycle repeats at 11:22:53.044 and 11:22:53.147 ...]
Jan 29 11:22:53 crc kubenswrapper[4766]: I0129 11:22:53.224676 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5"
Jan 29 11:22:53 crc kubenswrapper[4766]: I0129 11:22:53.224788 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 11:22:53 crc kubenswrapper[4766]: E0129 11:22:53.224898 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8"
Jan 29 11:22:53 crc kubenswrapper[4766]: E0129 11:22:53.225004 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[... node-status cycle repeats at 11:22:53.251 and 11:22:53.355 ...]
[... node-status cycle repeats at 11:22:53.459, 11:22:53.563, 11:22:53.666, 11:22:53.769, 11:22:53.873, and 11:22:53.977 ...]
Jan 29 11:22:54 crc kubenswrapper[4766]: I0129 11:22:54.024265 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 11:47:21.23128184 +0000 UTC
[... node-status cycle repeats at 11:22:54.080 and 11:22:54.183 ...]
Jan 29 11:22:54 crc kubenswrapper[4766]: I0129 11:22:54.223484 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 11:22:54 crc kubenswrapper[4766]: I0129 11:22:54.223505 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 11:22:54 crc kubenswrapper[4766]: E0129 11:22:54.223639 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 11:22:54 crc kubenswrapper[4766]: E0129 11:22:54.223774 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[... node-status cycle repeats at 11:22:54.286 and 11:22:54.390 ...]
[... node-status cycle repeats at 11:22:54.494, 11:22:54.597, 11:22:54.700, 11:22:54.803, 11:22:54.907, and 11:22:55.012 ...]
Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.024580 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 11:24:09.533315157 +0000 UTC
[... node-status cycle repeats at 11:22:55.115 and 11:22:55.218 ...]
Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.224255 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.224386 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5"
Jan 29 11:22:55 crc kubenswrapper[4766]: E0129 11:22:55.224588 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
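The setters.go:603 entries embed, verbatim, the Ready condition the kubelet is about to report. Decoding one makes the relevant fields obvious: status=False with reason KubeletNotReady, while the message carries the CNI detail. A sketch using a minimal struct modeled only on the fields visible in the log line (not the full Kubernetes NodeCondition type):

// condition.go - decode the Ready condition embedded in the
// "Node became not ready" entries above.
package main

import (
	"encoding/json"
	"fmt"
)

type condition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Payload copied from the log entry at 11:22:55.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:55Z","lastTransitionTime":"2026-01-29T11:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c condition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s reason=%s\n", c.Type, c.Status, c.Reason)
	fmt.Println("message:", c.Message)
}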
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:22:55 crc kubenswrapper[4766]: E0129 11:22:55.224760 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.247872 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ad5e6aa-608c-4f11-be50-ab47da7c3d32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9057e7dacac5ef2dd405ea124359e5bc143025ab45ad29f20d5f6c16da236b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d02adb96cd77bb10d186e4a9d47ea85ec282480dd0cfd5ef108274fc6b74d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab2524a59d6f3504907bae7dae0f390e8326b9490441dbee277bc0a44
d8c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63bd3ed7fe3334bb28ec0880e5a9afc307d112e4a801744891faf2c28710a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be35ac9ff26d4e33294cd586455634fa2e2f070b3b9c39f1b02cc683e2fdc7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89136d4c4f8fb5bba2c61dbdeeb8d207b694025da3d0b305163ca6d237a5c749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89136d4c4f8fb5bba2c61dbdeeb8d207b694025da3d0b305163ca6d237a5c749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c57a93549ba2188c3e3b8944e05cbc29caeddc0eb3f54f8bd4f019224a9bb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c57a93549ba2188c3e3b8944e05cbc29caeddc0eb3f54f8bd4f019224a9bb82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9c79346d5e42839cf96932f256383d9d926ddb9eb74b6959195bdc3502f6224b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c79346d5e42839cf96932f256383d9d926ddb9eb74b6959195bdc3502f6224b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.265764 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.292665 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.319067 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.322486 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.322542 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.322557 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.322578 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.322590 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:55Z","lastTransitionTime":"2026-01-29T11:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.336846 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e195676f45e707d0db5eec3c8922f03010412ac23081a16cbf04b29fb5698908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc72be68c22754da281e89fe9cd0b016a78feb34b8f9053dd0a28020bb733016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.353734 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"009587c0-701e-4765-bd10-2ba52a2a9016\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd45aa37a17c5cd3d79ef58b09a6e77ed413e4535ea0597922cd0425e23cb2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ft7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.374808 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5dc50cb-2d41-45cd-8a3d-615212a20120\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c126f1878b27bb8648cebba2334b545a61682575e486c7752447760c630b71f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a4c1de706188e9d9c986cf611fcfa0afc2fa6d0d9e45908d9864fbd096fb7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a1895436e31a3a277d7ef40231e37f768d143472a5d055ec3fa3908d59eb806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d6b9ab2c5f75cb3a1a6580174135bdbe87b1e341de30ae151d2c7916fb6e85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T11:21:38Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 11:21:38.187211 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 11:21:38.187475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 11:21:38.188924 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-858855041/tls.crt::/tmp/serving-cert-858855041/tls.key\\\\\\\"\\\\nI0129 11:21:38.443648 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 11:21:38.447463 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 11:21:38.447603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 11:21:38.447664 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 11:21:38.447692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 11:21:38.471406 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 11:21:38.471454 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 11:21:38.471479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 11:21:38.471483 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 11:21:38.471487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 11:21:38.471491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 11:21:38.471436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 11:21:38.475175 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964049484efc670285ee54e4f6081c1f719edaa8143966e9762028ad97d2518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.391465 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a45c5025-5014-4cda-b09c-b8fe58daa0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c3e4b23de55df1e7416d9834c594e6b8baa72850428481ae9589ac2e3a2848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6af6b65be19d42cb0398dd814bea1497dd7a258533b34d84a55aafe3997a422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://368e7d2846989301de5391a33bce19ec278b8a597dad4b565340a9102cb0ca8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2953606dd84fc8b415bb9b1f4a2b35c8d927dfcdf449b8246096b9d7ac0c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.410215 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39913c76af43bc679114472f98a7710e422170785d0f9d3159f0cfd9f07df7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6xqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npgg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.426077 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.426130 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.426172 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.426194 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.426206 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:55Z","lastTransitionTime":"2026-01-29T11:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.428471 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09a0f18f505a083c61d38bf1002431b5e7ccee8f59f0027b32e7234f017165d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.445900 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b907fc44-f3fb-43b4-86e2-60d1379c3b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0b3d989d7372fff1ec80dcf86e75ad52c0ef6b9bb86df95de8dfc1389974d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9b39c90c20f0460ddc8661ffd383da54fdd6f27265dfb21018762e460435f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8p4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dc6zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 29 
11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.462884 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20dd6698-d285-4d33-b108-af2e963a6230\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627f1cbde0bcbdc735a292c896c151e796db5038d619da66cc9d97c9e94a5721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340091929d2db093c111ffe69890053b76766a605522ff9ce5ee2d307430a47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://340091929d2db093c111ffe69890053b76766a605522ff9ce5ee2d307430a47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.482309 4766 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d281584a5014a8a55b6484802ff5756c35f3fcbb2ca3f65bd1184e77c59a243b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.499728 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c78d80-661d-4839-a90d-3e9a137c590b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://509f5e01bea7149b8c69f416c9d88c388d3db3e6300254e1d58b167629183dfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6eeec32db3cd97e718206000b41183351e1698186a661547746982cef1518a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d545b1c015854aae81ddf385c118593789397a7f62077baaf1261ddda6b81fad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d4b9058cea53335860f66fdf06820202660275143325c3dc5b813df1d60818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.519703 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnk2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6986483f-6521-45da-9034-8576037c32ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08a33c85d7bb4c50e3fc2fb60c7b0f91c0bc795639c249410293ab1edd2d684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:22:39Z\\\",\\\"message\\\":\\\"2026-01-29T11:21:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_914548d4-a627-4d59-bc6c-658d0536ad2d\\\\n2026-01-29T11:21:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_914548d4-a627-4d59-bc6c-658d0536ad2d to /host/opt/cni/bin/\\\\n2026-01-29T11:21:54Z [verbose] multus-daemon started\\\\n2026-01-29T11:21:54Z [verbose] Readiness 
Indicator file check\\\\n2026-01-29T11:22:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kk27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnk2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.529787 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.529852 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.529863 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.529880 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.529896 4766 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:55Z","lastTransitionTime":"2026-01-29T11:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.543361 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98622e63-ce1a-413d-8a0a-32610d52ab94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\
\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:22:48Z\\\",\\\"message\\\":\\\" 6910 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:48.418328 6910 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:48.418506 6910 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:48.419402 6910 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 11:22:48.435063 6910 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 11:22:48.435160 6910 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 11:22:48.435242 6910 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 11:22:48.435286 6910 factory.go:656] Stopping watch factory\\\\nI0129 11:22:48.435307 6910 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 11:22:48.473540 6910 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 11:22:48.473593 6910 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 11:22:48.473691 6910 ovnkube.go:599] Stopped ovnkube\\\\nI0129 11:22:48.473730 6910 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 11:22:48.473823 6910 
ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:22:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zn4kn_openshift-ovn-kubernetes(98622e63-ce1a-413d-8a0a-32610d52ab94)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xk98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zn4kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.559948 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3910984a-a754-462f-9414-183a50bb78b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mcwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.575127 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vppxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ce22607-a7fc-47f9-8d18-a8ef1351916c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9a6804e40352e3488ebe54db45cacd46796db5d53f51da6f5b74138360fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gdsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vppxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.595123 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hppjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9fe5f65-adbd-48b9-aa58-dc26c6bb32dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205005e542e6b395fe896960c605a3d4f516929d89a7fee3da8b2e9e1f9e6213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://805898245f0049964c36345427a09a4fd5ae9c60033ebc2263e59576e6ac315b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1216d5494fcfbceff998d4dbfaefd2786da042032b64666f4bcae4423e57e54b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87dca8b0f9523a164aecb796af7a770507a570fa56e95143c15e11542fc1f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7e4d94968e9f24fe093bf6d075a3e10fed56889504461c4c0279ba6dbef0439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a666b743e3df0c66f99d9822c6ef05ddc3c05d79cf6e3a7045f2e917bb66380e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e419c1d8f691c5be4220df608ea429ed457ac09da4861a565d5c9ef20c05a90b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T11:21:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T11:21:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9288\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T11:21:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hppjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.633599 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.633655 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:55 crc 
kubenswrapper[4766]: I0129 11:22:55.633664 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.633684 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.633695 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:55Z","lastTransitionTime":"2026-01-29T11:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.737113 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.737180 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.737192 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.737218 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.737232 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:55Z","lastTransitionTime":"2026-01-29T11:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.839953 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.840007 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.840020 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.840043 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.840062 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:55Z","lastTransitionTime":"2026-01-29T11:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.942640 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.942699 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.942711 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.942731 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:55 crc kubenswrapper[4766]: I0129 11:22:55.942743 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:55Z","lastTransitionTime":"2026-01-29T11:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.025383 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 10:59:37.812681867 +0000 UTC Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.046942 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.046987 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.046998 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.047018 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.047033 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:56Z","lastTransitionTime":"2026-01-29T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.150079 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.150128 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.150139 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.150158 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.150168 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:56Z","lastTransitionTime":"2026-01-29T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.223762 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.223856 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:22:56 crc kubenswrapper[4766]: E0129 11:22:56.224498 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:22:56 crc kubenswrapper[4766]: E0129 11:22:56.224591 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.253103 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.253157 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.253171 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.253188 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.253199 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:56Z","lastTransitionTime":"2026-01-29T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.356523 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.356634 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.356649 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.356673 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.356689 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:56Z","lastTransitionTime":"2026-01-29T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.459559 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.459614 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.459625 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.459645 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.459659 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:56Z","lastTransitionTime":"2026-01-29T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.562712 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.563699 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.563745 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.563776 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.563816 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:56Z","lastTransitionTime":"2026-01-29T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.667951 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.668028 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.668041 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.668062 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.668075 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:56Z","lastTransitionTime":"2026-01-29T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.772097 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.772175 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.772187 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.772207 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.772226 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:56Z","lastTransitionTime":"2026-01-29T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.876529 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.876586 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.876597 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.876619 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.876633 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:56Z","lastTransitionTime":"2026-01-29T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.980356 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.981355 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.981373 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.981396 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:56 crc kubenswrapper[4766]: I0129 11:22:56.981427 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:56Z","lastTransitionTime":"2026-01-29T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.025586 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 07:24:15.786238231 +0000 UTC Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.084709 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.084800 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.084818 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.084873 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.084890 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:57Z","lastTransitionTime":"2026-01-29T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.188242 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.188303 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.188318 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.188343 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.188361 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:57Z","lastTransitionTime":"2026-01-29T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.223716 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:22:57 crc kubenswrapper[4766]: E0129 11:22:57.223931 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.224076 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:22:57 crc kubenswrapper[4766]: E0129 11:22:57.224279 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.291541 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.291613 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.291626 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.291644 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.291654 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:57Z","lastTransitionTime":"2026-01-29T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.300014 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.300094 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.300110 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.300134 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.300149 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:57Z","lastTransitionTime":"2026-01-29T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:57 crc kubenswrapper[4766]: E0129 11:22:57.314702 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:57Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.321548 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.321603 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.321613 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.321633 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.321644 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:57Z","lastTransitionTime":"2026-01-29T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:57 crc kubenswrapper[4766]: E0129 11:22:57.337232 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:57Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.343803 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.343860 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.343877 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.343900 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.343914 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:57Z","lastTransitionTime":"2026-01-29T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:57 crc kubenswrapper[4766]: E0129 11:22:57.358971 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:57Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.365167 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.365213 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.365227 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.365244 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.365255 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:57Z","lastTransitionTime":"2026-01-29T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:57 crc kubenswrapper[4766]: E0129 11:22:57.380083 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:57Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.385396 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.385491 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.385506 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.385533 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.385547 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:57Z","lastTransitionTime":"2026-01-29T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:57 crc kubenswrapper[4766]: E0129 11:22:57.400138 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:22:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ba66e3-115c-4d10-9153-6b9869c521f9\\\",\\\"systemUUID\\\":\\\"e1cf5141-f02b-4b4b-ad4c-52cf74069ee2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:22:57Z is after 2025-08-24T17:21:41Z" Jan 29 11:22:57 crc kubenswrapper[4766]: E0129 11:22:57.400264 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.402593 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.402654 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.402671 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.402709 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.402723 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:57Z","lastTransitionTime":"2026-01-29T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.439868 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3910984a-a754-462f-9414-183a50bb78b8-metrics-certs\") pod \"network-metrics-daemon-xrjg5\" (UID: \"3910984a-a754-462f-9414-183a50bb78b8\") " pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:22:57 crc kubenswrapper[4766]: E0129 11:22:57.440086 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 11:22:57 crc kubenswrapper[4766]: E0129 11:22:57.440178 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3910984a-a754-462f-9414-183a50bb78b8-metrics-certs podName:3910984a-a754-462f-9414-183a50bb78b8 nodeName:}" failed. No retries permitted until 2026-01-29 11:24:01.440152346 +0000 UTC m=+178.552545397 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3910984a-a754-462f-9414-183a50bb78b8-metrics-certs") pod "network-metrics-daemon-xrjg5" (UID: "3910984a-a754-462f-9414-183a50bb78b8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.505523 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.505561 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.505573 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.505589 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:57 crc kubenswrapper[4766]: I0129 11:22:57.505600 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:57Z","lastTransitionTime":"2026-01-29T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[The five-record node-status block above (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, and the setters.go:603 "Node became not ready" condition carrying the same KubeletNotReady/no-CNI-configuration message) repeats, identical except for timestamps, at 11:22:57.505, 11:22:57.609, 11:22:57.712, 11:22:57.816, 11:22:57.920, and 11:22:58.023.]
Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.026489 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 16:57:27.487329693 +0000 UTC
[The node-status block repeats at 11:22:58.126.]
Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.224484 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.224611 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 11:22:58 crc kubenswrapper[4766]: E0129 11:22:58.224676 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 11:22:58 crc kubenswrapper[4766]: E0129 11:22:58.224833 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
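Every "Error syncing pod" and "Node became not ready" record in this stretch traces back to one condition: no CNI configuration file in /etc/kubernetes/cni/net.d/. A minimal sketch of that readiness check; the directory path comes from the log, while treating any .conf/.conflist/.json file as a CNI config is an assumption for illustration:

# Sketch of the check behind "no CNI configuration file in
# /etc/kubernetes/cni/net.d/": the node stays NotReady until the network
# provider (here OVN-Kubernetes) writes a CNI config into that directory.
from pathlib import Path

def cni_ready(confdir: str = "/etc/kubernetes/cni/net.d") -> bool:
    p = Path(confdir)
    if not p.is_dir():
        return False
    return any(f.suffix in {".conf", ".conflist", ".json"} for f in p.iterdir())

if __name__ == "__main__":
    print("NetworkReady:", cni_ready())  # False on this node until OVN writes its config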
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.229366 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.229426 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.229446 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.229467 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.229486 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:58Z","lastTransitionTime":"2026-01-29T11:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.332718 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.332766 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.332778 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.332795 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.332807 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:58Z","lastTransitionTime":"2026-01-29T11:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.435980 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.436025 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.436036 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.436054 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.436066 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:58Z","lastTransitionTime":"2026-01-29T11:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.538775 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.538812 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.538826 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.538844 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.538874 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:58Z","lastTransitionTime":"2026-01-29T11:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.642078 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.642134 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.642144 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.642162 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.642175 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:58Z","lastTransitionTime":"2026-01-29T11:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.745737 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.745792 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.745803 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.745822 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.745835 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:58Z","lastTransitionTime":"2026-01-29T11:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.849291 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.849358 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.849372 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.849393 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.849408 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:58Z","lastTransitionTime":"2026-01-29T11:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.952553 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.952607 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.952619 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.952636 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:58 crc kubenswrapper[4766]: I0129 11:22:58.952648 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:58Z","lastTransitionTime":"2026-01-29T11:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.027168 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 23:30:09.14097474 +0000 UTC
[The node-status block repeats at 11:22:59.055 and 11:22:59.159.]
Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.223522 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.223804 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5"
Jan 29 11:22:59 crc kubenswrapper[4766]: E0129 11:22:59.223932 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 11:22:59 crc kubenswrapper[4766]: E0129 11:22:59.224060 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8"
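The certificate_manager records in this stretch all show the same expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline on every pass. That pattern is consistent with a jittered deadline drawn at random from a window inside the certificate's validity period each time rotation is evaluated. A sketch under that assumption; the 0.7 to 0.9 window and the one-year issue time are illustrative, only the expiration comes from the log:

# Sketch of a jittered certificate-rotation deadline: a random point in a
# window inside the cert's validity period, recomputed on every check,
# which would explain why the deadline differs in each record above.
import random
from datetime import datetime, timedelta

def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
    validity = (not_after - not_before).total_seconds()
    jitter = random.uniform(0.7, 0.9)  # assumed jitter window
    return not_before + timedelta(seconds=validity * jitter)

not_after = datetime(2026, 2, 24, 5, 53, 3)    # expiration from the log
not_before = not_after - timedelta(days=365)   # assumed one-year lifetime
for _ in range(3):
    print(rotation_deadline(not_before, not_after))  # a different deadline each call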
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:22:59 crc kubenswrapper[4766]: E0129 11:22:59.224060 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.262898 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.263023 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.263040 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.263067 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.263084 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:59Z","lastTransitionTime":"2026-01-29T11:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.365357 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.365448 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.365464 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.365490 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.365512 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:59Z","lastTransitionTime":"2026-01-29T11:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.470235 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.470296 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.470313 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.470338 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.470351 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:59Z","lastTransitionTime":"2026-01-29T11:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.573167 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.573230 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.573247 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.573265 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.573276 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:59Z","lastTransitionTime":"2026-01-29T11:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.676694 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.676750 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.676760 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.676778 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.676789 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:59Z","lastTransitionTime":"2026-01-29T11:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.780153 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.780192 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.780205 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.780223 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.780234 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:59Z","lastTransitionTime":"2026-01-29T11:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.883781 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.883883 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.883898 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.883921 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.883936 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:59Z","lastTransitionTime":"2026-01-29T11:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.986447 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.986496 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.986508 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.986531 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:22:59 crc kubenswrapper[4766]: I0129 11:22:59.986546 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:22:59Z","lastTransitionTime":"2026-01-29T11:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 11:23:00 crc kubenswrapper[4766]: I0129 11:23:00.027379 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 19:30:18.917888646 +0000 UTC
[The node-status block repeats at 11:23:00.089 and 11:23:00.192.]
Jan 29 11:23:00 crc kubenswrapper[4766]: I0129 11:23:00.223482 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 11:23:00 crc kubenswrapper[4766]: E0129 11:23:00.223612 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 11:23:00 crc kubenswrapper[4766]: I0129 11:23:00.223737 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 11:23:00 crc kubenswrapper[4766]: E0129 11:23:00.223919 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[The node-status block repeats at 11:23:00.296 and 11:23:00.399.]
[The node-status block repeats at 11:23:00.502, 11:23:00.605, 11:23:00.708, 11:23:00.812, 11:23:00.914, and 11:23:01.020.]
Jan 29 11:23:01 crc kubenswrapper[4766]: I0129 11:23:01.027633 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 06:14:51.673432734 +0000 UTC
[The node-status block repeats at 11:23:01.123.]
Jan 29 11:23:01 crc kubenswrapper[4766]: I0129 11:23:01.224072 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 11:23:01 crc kubenswrapper[4766]: I0129 11:23:01.224090 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5"
Jan 29 11:23:01 crc kubenswrapper[4766]: E0129 11:23:01.224620 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 11:23:01 crc kubenswrapper[4766]: E0129 11:23:01.224736 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8"
Jan 29 11:23:01 crc kubenswrapper[4766]: I0129 11:23:01.224872 4766 scope.go:117] "RemoveContainer" containerID="4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b"
Jan 29 11:23:01 crc kubenswrapper[4766]: E0129 11:23:01.225042 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zn4kn_openshift-ovn-kubernetes(98622e63-ce1a-413d-8a0a-32610d52ab94)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94"
[The node-status block repeats at 11:23:01.225 and 11:23:01.328.]
[The node-status block repeats at 11:23:01.432, 11:23:01.536, 11:23:01.639, 11:23:01.742, 11:23:01.844, and 11:23:01.947.]
Jan 29 11:23:02 crc kubenswrapper[4766]: I0129 11:23:02.028519 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 15:23:20.145075018 +0000 UTC
[The node-status block repeats at 11:23:02.049 and 11:23:02.153.]
Jan 29 11:23:02 crc kubenswrapper[4766]: I0129 11:23:02.223535 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 11:23:02 crc kubenswrapper[4766]: I0129 11:23:02.223543 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 11:23:02 crc kubenswrapper[4766]: E0129 11:23:02.223731 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 11:23:02 crc kubenswrapper[4766]: E0129 11:23:02.223854 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[The node-status block repeats at 11:23:02.256, 11:23:02.359, 11:23:02.463, and 11:23:02.566.]
Jan 29 11:23:02 crc kubenswrapper[4766]: I0129 11:23:02.670551 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:23:02 crc kubenswrapper[4766]: I0129 11:23:02.670605 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:23:02 crc kubenswrapper[4766]: I0129 11:23:02.670616 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:23:02 crc kubenswrapper[4766]: I0129 11:23:02.670639 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:23:02 crc kubenswrapper[4766]: I0129 11:23:02.670652 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:23:02Z","lastTransitionTime":"2026-01-29T11:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:23:02 crc kubenswrapper[4766]: I0129 11:23:02.773368 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:23:02 crc kubenswrapper[4766]: I0129 11:23:02.773450 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:23:02 crc kubenswrapper[4766]: I0129 11:23:02.773462 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:23:02 crc kubenswrapper[4766]: I0129 11:23:02.773481 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:23:02 crc kubenswrapper[4766]: I0129 11:23:02.773500 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:23:02Z","lastTransitionTime":"2026-01-29T11:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:23:02 crc kubenswrapper[4766]: I0129 11:23:02.876617 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:23:02 crc kubenswrapper[4766]: I0129 11:23:02.876662 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:23:02 crc kubenswrapper[4766]: I0129 11:23:02.876675 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:23:02 crc kubenswrapper[4766]: I0129 11:23:02.876694 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:23:02 crc kubenswrapper[4766]: I0129 11:23:02.876707 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:23:02Z","lastTransitionTime":"2026-01-29T11:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:23:02 crc kubenswrapper[4766]: I0129 11:23:02.979583 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:23:02 crc kubenswrapper[4766]: I0129 11:23:02.979670 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:23:02 crc kubenswrapper[4766]: I0129 11:23:02.979687 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:23:02 crc kubenswrapper[4766]: I0129 11:23:02.979710 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:23:02 crc kubenswrapper[4766]: I0129 11:23:02.979725 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:23:02Z","lastTransitionTime":"2026-01-29T11:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.028780 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 02:33:42.876443454 +0000 UTC Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.082748 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.082794 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.082804 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.082827 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.082838 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:23:03Z","lastTransitionTime":"2026-01-29T11:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.185978 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.186047 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.186064 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.186087 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.186103 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:23:03Z","lastTransitionTime":"2026-01-29T11:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.223821 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:23:03 crc kubenswrapper[4766]: E0129 11:23:03.223991 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.224055 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:23:03 crc kubenswrapper[4766]: E0129 11:23:03.224125 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.289247 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.289295 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.289307 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.289327 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.289338 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:23:03Z","lastTransitionTime":"2026-01-29T11:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.393297 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.393358 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.393371 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.393392 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.393406 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:23:03Z","lastTransitionTime":"2026-01-29T11:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.498724 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.498793 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.498804 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.498824 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.498842 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:23:03Z","lastTransitionTime":"2026-01-29T11:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.601976 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.602016 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.602027 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.602045 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.602058 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:23:03Z","lastTransitionTime":"2026-01-29T11:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.704287 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.704323 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.704334 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.704351 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.704362 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:23:03Z","lastTransitionTime":"2026-01-29T11:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.807164 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.807212 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.807225 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.807244 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.807257 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:23:03Z","lastTransitionTime":"2026-01-29T11:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.910719 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.910761 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.910771 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.910789 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:23:03 crc kubenswrapper[4766]: I0129 11:23:03.910799 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:23:03Z","lastTransitionTime":"2026-01-29T11:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.013136 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.013184 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.013196 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.013218 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.013235 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:23:04Z","lastTransitionTime":"2026-01-29T11:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.029719 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 23:48:21.265558644 +0000 UTC Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.116298 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.116353 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.116365 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.116382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.116392 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:23:04Z","lastTransitionTime":"2026-01-29T11:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.219025 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.219076 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.219087 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.219102 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.219114 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:23:04Z","lastTransitionTime":"2026-01-29T11:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.223540 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.223572 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:23:04 crc kubenswrapper[4766]: E0129 11:23:04.223652 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:23:04 crc kubenswrapper[4766]: E0129 11:23:04.223775 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.322004 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.322044 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.322055 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.322070 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.322085 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:23:04Z","lastTransitionTime":"2026-01-29T11:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.425376 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.425477 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.425493 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.425515 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.425530 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:23:04Z","lastTransitionTime":"2026-01-29T11:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.528226 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.528295 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.528313 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.528335 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.528352 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:23:04Z","lastTransitionTime":"2026-01-29T11:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.630903 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.630954 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.630967 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.630982 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.630993 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:23:04Z","lastTransitionTime":"2026-01-29T11:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.735063 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.735112 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.735125 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.735145 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.735157 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:23:04Z","lastTransitionTime":"2026-01-29T11:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.838861 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.838916 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.838927 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.838947 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:23:04 crc kubenswrapper[4766]: I0129 11:23:04.838964 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:23:04Z","lastTransitionTime":"2026-01-29T11:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:23:04 crc kubenswrapper[4766]: E0129 11:23:04.939499 4766 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 29 11:23:05 crc kubenswrapper[4766]: I0129 11:23:05.031067 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 06:05:35.858243005 +0000 UTC Jan 29 11:23:05 crc kubenswrapper[4766]: I0129 11:23:05.224364 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:23:05 crc kubenswrapper[4766]: I0129 11:23:05.224470 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:23:05 crc kubenswrapper[4766]: E0129 11:23:05.224539 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:23:05 crc kubenswrapper[4766]: E0129 11:23:05.224632 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8"
Jan 29 11:23:05 crc kubenswrapper[4766]: I0129 11:23:05.256452 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=16.256399892 podStartE2EDuration="16.256399892s" podCreationTimestamp="2026-01-29 11:22:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:05.256228857 +0000 UTC m=+122.368621888" watchObservedRunningTime="2026-01-29 11:23:05.256399892 +0000 UTC m=+122.368792903"
Jan 29 11:23:05 crc kubenswrapper[4766]: I0129 11:23:05.292559 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-vppxv" podStartSLOduration=87.292531262 podStartE2EDuration="1m27.292531262s" podCreationTimestamp="2026-01-29 11:21:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:05.292331856 +0000 UTC m=+122.404724887" watchObservedRunningTime="2026-01-29 11:23:05.292531262 +0000 UTC m=+122.404924273"
Jan 29 11:23:05 crc kubenswrapper[4766]: I0129 11:23:05.292783 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-gnk2d" podStartSLOduration=86.292775959 podStartE2EDuration="1m26.292775959s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:05.27598233 +0000 UTC m=+122.388375341" watchObservedRunningTime="2026-01-29 11:23:05.292775959 +0000 UTC m=+122.405168980"
Jan 29 11:23:05 crc kubenswrapper[4766]: I0129 11:23:05.320110 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-hppjr" podStartSLOduration=86.320081397 podStartE2EDuration="1m26.320081397s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:05.319906652 +0000 UTC m=+122.432299683" watchObservedRunningTime="2026-01-29 11:23:05.320081397 +0000 UTC m=+122.432474408"
Jan 29 11:23:05 crc kubenswrapper[4766]: I0129 11:23:05.446536 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-fzj49" podStartSLOduration=87.446505752 podStartE2EDuration="1m27.446505752s" podCreationTimestamp="2026-01-29 11:21:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:05.429154237 +0000 UTC m=+122.541547248" watchObservedRunningTime="2026-01-29 11:23:05.446505752 +0000 UTC m=+122.558898763"
Jan 29 11:23:05 crc kubenswrapper[4766]: I0129 11:23:05.465293 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=68.465263227 podStartE2EDuration="1m8.465263227s" podCreationTimestamp="2026-01-29 11:21:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:05.446349528 +0000 UTC m=+122.558742549" watchObservedRunningTime="2026-01-29 11:23:05.465263227 +0000 UTC m=+122.577656238"
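The pod_startup_latency_tracker entries above and below report three timestamps per pod (creation, image-pull window, first observed running) plus two derived durations. A minimal sketch in Go of the arithmetic these fields imply; the subtraction of the image-pull window is an assumption, not the kubelet source, but it is consistent with these entries, where both pull timestamps are the zero value and podStartSLOduration equals podStartE2EDuration:

package main

import (
	"fmt"
	"time"
)

// startupDurations sketches how the two logged durations relate.
// Assumed semantics: e2e is observedRunningTime - podCreationTimestamp,
// and the SLO duration excludes the image-pull window when one was observed.
func startupDurations(created, firstPull, lastPull, running time.Time) (slo, e2e time.Duration) {
	e2e = running.Sub(created)
	slo = e2e
	if !firstPull.IsZero() && !lastPull.IsZero() {
		slo -= lastPull.Sub(firstPull) // pull time does not count against the SLO
	}
	return slo, e2e
}

func main() {
	// Values copied from the kube-apiserver-crc entry above; pull timestamps
	// are the zero time ("0001-01-01 00:00:00 +0000 UTC"), i.e. no pull observed.
	created := time.Date(2026, 1, 29, 11, 21, 57, 0, time.UTC)
	running := time.Date(2026, 1, 29, 11, 23, 5, 465263227, time.UTC)
	slo, e2e := startupDurations(created, time.Time{}, time.Time{}, running)
	fmt.Println(slo, e2e) // 1m8.465263227s 1m8.465263227s, matching the log line
}

The "m=+122.57…" suffixes in the same entries are monotonic-clock offsets since the kubelet process started, which is why they track the wall-clock timestamps only approximately.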
Jan 29 11:23:05 crc kubenswrapper[4766]: I0129 11:23:05.466039 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=69.466031709 podStartE2EDuration="1m9.466031709s" podCreationTimestamp="2026-01-29 11:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:05.461902961 +0000 UTC m=+122.574295972" watchObservedRunningTime="2026-01-29 11:23:05.466031709 +0000 UTC m=+122.578424720"
Jan 29 11:23:05 crc kubenswrapper[4766]: I0129 11:23:05.505382 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=16.50535602 podStartE2EDuration="16.50535602s" podCreationTimestamp="2026-01-29 11:22:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:05.502974482 +0000 UTC m=+122.615367503" watchObservedRunningTime="2026-01-29 11:23:05.50535602 +0000 UTC m=+122.617749031"
Jan 29 11:23:05 crc kubenswrapper[4766]: I0129 11:23:05.558818 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podStartSLOduration=86.558795144 podStartE2EDuration="1m26.558795144s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:05.558731952 +0000 UTC m=+122.671124983" watchObservedRunningTime="2026-01-29 11:23:05.558795144 +0000 UTC m=+122.671188175"
Jan 29 11:23:05 crc kubenswrapper[4766]: I0129 11:23:05.578740 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=39.578717282 podStartE2EDuration="39.578717282s" podCreationTimestamp="2026-01-29 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:05.577907339 +0000 UTC m=+122.690300350" watchObservedRunningTime="2026-01-29 11:23:05.578717282 +0000 UTC m=+122.691110293"
Jan 29 11:23:05 crc kubenswrapper[4766]: I0129 11:23:05.629146 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dc6zm" podStartSLOduration=86.629120799 podStartE2EDuration="1m26.629120799s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:05.628695087 +0000 UTC m=+122.741088098" watchObservedRunningTime="2026-01-29 11:23:05.629120799 +0000 UTC m=+122.741513810"
Jan 29 11:23:05 crc kubenswrapper[4766]: E0129 11:23:05.847144 4766 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
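Every NotReady condition, "No sandbox for pod can be found" wait, and "Error syncing pod" entry in this stretch traces back to the single error above: /etc/kubernetes/cni/net.d/ contains no CNI configuration yet, because the network provider (here, the ovnkube-controller container that the later CrashLoopBackOff entry shows restarting) has not written one. A minimal sketch of that readiness check; the glob patterns and function are illustrative assumptions, not kubelet's actual implementation:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniReady sketches the probe behind the repeated NetworkReady=false
// condition: the runtime scans the CNI conf directory for at least one
// config file and, finding none, the node's Ready condition stays False
// with reason KubeletNotReady until the network plugin drops one in.
func cniReady(confDir string) error {
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} { // assumed file patterns
		matches, err := filepath.Glob(filepath.Join(confDir, pat))
		if err != nil {
			return err
		}
		if len(matches) > 0 {
			return nil // at least one CNI configuration exists; network is ready
		}
	}
	return fmt.Errorf("no CNI configuration file in %s. Has your network provider started?", confDir)
}

func main() {
	if err := cniReady("/etc/kubernetes/cni/net.d/"); err != nil {
		fmt.Fprintln(os.Stderr, "NetworkReady=false:", err)
	}
}

This also explains why the check runs so often: node status updates, pod sync attempts, and sandbox creation each re-evaluate network readiness independently, so one missing file fans out into the repeated entries below.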
Jan 29 11:23:06 crc kubenswrapper[4766]: I0129 11:23:06.031778 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 04:45:09.128605395 +0000 UTC Jan 29 11:23:06 crc kubenswrapper[4766]: I0129 11:23:06.224383 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:23:06 crc kubenswrapper[4766]: I0129 11:23:06.224466 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:23:06 crc kubenswrapper[4766]: E0129 11:23:06.224604 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:23:06 crc kubenswrapper[4766]: E0129 11:23:06.224687 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:23:07 crc kubenswrapper[4766]: I0129 11:23:07.032906 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 14:56:26.526256261 +0000 UTC Jan 29 11:23:07 crc kubenswrapper[4766]: I0129 11:23:07.223979 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:23:07 crc kubenswrapper[4766]: I0129 11:23:07.224037 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:23:07 crc kubenswrapper[4766]: E0129 11:23:07.224154 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:23:07 crc kubenswrapper[4766]: E0129 11:23:07.224205 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:23:07 crc kubenswrapper[4766]: I0129 11:23:07.753472 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:23:07 crc kubenswrapper[4766]: I0129 11:23:07.753534 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:23:07 crc kubenswrapper[4766]: I0129 11:23:07.753547 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:23:07 crc kubenswrapper[4766]: I0129 11:23:07.753564 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:23:07 crc kubenswrapper[4766]: I0129 11:23:07.753575 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:23:07Z","lastTransitionTime":"2026-01-29T11:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:23:07 crc kubenswrapper[4766]: I0129 11:23:07.801644 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdltl"] Jan 29 11:23:07 crc kubenswrapper[4766]: I0129 11:23:07.802125 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdltl" Jan 29 11:23:07 crc kubenswrapper[4766]: I0129 11:23:07.804930 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 29 11:23:07 crc kubenswrapper[4766]: I0129 11:23:07.804987 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 29 11:23:07 crc kubenswrapper[4766]: I0129 11:23:07.805234 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 29 11:23:07 crc kubenswrapper[4766]: I0129 11:23:07.806533 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 29 11:23:07 crc kubenswrapper[4766]: I0129 11:23:07.966374 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d741c029-1463-4e67-abc7-6ec1cc85c568-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gdltl\" (UID: \"d741c029-1463-4e67-abc7-6ec1cc85c568\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdltl" Jan 29 11:23:07 crc kubenswrapper[4766]: I0129 11:23:07.966469 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d741c029-1463-4e67-abc7-6ec1cc85c568-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gdltl\" (UID: \"d741c029-1463-4e67-abc7-6ec1cc85c568\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdltl" Jan 29 11:23:07 crc kubenswrapper[4766]: I0129 11:23:07.966503 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/d741c029-1463-4e67-abc7-6ec1cc85c568-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gdltl\" (UID: \"d741c029-1463-4e67-abc7-6ec1cc85c568\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdltl" Jan 29 11:23:07 crc kubenswrapper[4766]: I0129 11:23:07.966556 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d741c029-1463-4e67-abc7-6ec1cc85c568-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gdltl\" (UID: \"d741c029-1463-4e67-abc7-6ec1cc85c568\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdltl" Jan 29 11:23:07 crc kubenswrapper[4766]: I0129 11:23:07.966602 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d741c029-1463-4e67-abc7-6ec1cc85c568-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gdltl\" (UID: \"d741c029-1463-4e67-abc7-6ec1cc85c568\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdltl" Jan 29 11:23:08 crc kubenswrapper[4766]: I0129 11:23:08.033468 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 01:12:19.009187155 +0000 UTC Jan 29 11:23:08 crc kubenswrapper[4766]: I0129 11:23:08.033578 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 29 11:23:08 crc kubenswrapper[4766]: I0129 11:23:08.047925 4766 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 29 11:23:08 crc kubenswrapper[4766]: I0129 11:23:08.067841 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d741c029-1463-4e67-abc7-6ec1cc85c568-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gdltl\" (UID: \"d741c029-1463-4e67-abc7-6ec1cc85c568\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdltl" Jan 29 11:23:08 crc kubenswrapper[4766]: I0129 11:23:08.067959 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d741c029-1463-4e67-abc7-6ec1cc85c568-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gdltl\" (UID: \"d741c029-1463-4e67-abc7-6ec1cc85c568\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdltl" Jan 29 11:23:08 crc kubenswrapper[4766]: I0129 11:23:08.067997 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d741c029-1463-4e67-abc7-6ec1cc85c568-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gdltl\" (UID: \"d741c029-1463-4e67-abc7-6ec1cc85c568\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdltl" Jan 29 11:23:08 crc kubenswrapper[4766]: I0129 11:23:08.068067 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d741c029-1463-4e67-abc7-6ec1cc85c568-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gdltl\" (UID: \"d741c029-1463-4e67-abc7-6ec1cc85c568\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdltl" Jan 29 11:23:08 crc kubenswrapper[4766]: I0129 11:23:08.068144 4766 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d741c029-1463-4e67-abc7-6ec1cc85c568-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gdltl\" (UID: \"d741c029-1463-4e67-abc7-6ec1cc85c568\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdltl" Jan 29 11:23:08 crc kubenswrapper[4766]: I0129 11:23:08.068218 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d741c029-1463-4e67-abc7-6ec1cc85c568-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gdltl\" (UID: \"d741c029-1463-4e67-abc7-6ec1cc85c568\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdltl" Jan 29 11:23:08 crc kubenswrapper[4766]: I0129 11:23:08.068308 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d741c029-1463-4e67-abc7-6ec1cc85c568-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gdltl\" (UID: \"d741c029-1463-4e67-abc7-6ec1cc85c568\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdltl" Jan 29 11:23:08 crc kubenswrapper[4766]: I0129 11:23:08.069191 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d741c029-1463-4e67-abc7-6ec1cc85c568-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gdltl\" (UID: \"d741c029-1463-4e67-abc7-6ec1cc85c568\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdltl" Jan 29 11:23:08 crc kubenswrapper[4766]: I0129 11:23:08.076951 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d741c029-1463-4e67-abc7-6ec1cc85c568-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gdltl\" (UID: \"d741c029-1463-4e67-abc7-6ec1cc85c568\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdltl" Jan 29 11:23:08 crc kubenswrapper[4766]: I0129 11:23:08.089920 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d741c029-1463-4e67-abc7-6ec1cc85c568-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gdltl\" (UID: \"d741c029-1463-4e67-abc7-6ec1cc85c568\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdltl" Jan 29 11:23:08 crc kubenswrapper[4766]: I0129 11:23:08.118794 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdltl" Jan 29 11:23:08 crc kubenswrapper[4766]: W0129 11:23:08.143210 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd741c029_1463_4e67_abc7_6ec1cc85c568.slice/crio-e6fb26c3355e7608f3e313891ced43c4d69ed96d57b14e5368a50db06c17d914 WatchSource:0}: Error finding container e6fb26c3355e7608f3e313891ced43c4d69ed96d57b14e5368a50db06c17d914: Status 404 returned error can't find the container with id e6fb26c3355e7608f3e313891ced43c4d69ed96d57b14e5368a50db06c17d914 Jan 29 11:23:08 crc kubenswrapper[4766]: I0129 11:23:08.223630 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:23:08 crc kubenswrapper[4766]: I0129 11:23:08.223777 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:23:08 crc kubenswrapper[4766]: E0129 11:23:08.224337 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:23:08 crc kubenswrapper[4766]: E0129 11:23:08.224565 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:23:08 crc kubenswrapper[4766]: I0129 11:23:08.247990 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdltl" event={"ID":"d741c029-1463-4e67-abc7-6ec1cc85c568","Type":"ContainerStarted","Data":"e6fb26c3355e7608f3e313891ced43c4d69ed96d57b14e5368a50db06c17d914"} Jan 29 11:23:09 crc kubenswrapper[4766]: I0129 11:23:09.224340 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:23:09 crc kubenswrapper[4766]: I0129 11:23:09.224494 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:23:09 crc kubenswrapper[4766]: E0129 11:23:09.225618 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:23:09 crc kubenswrapper[4766]: E0129 11:23:09.225880 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:23:09 crc kubenswrapper[4766]: I0129 11:23:09.253722 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdltl" event={"ID":"d741c029-1463-4e67-abc7-6ec1cc85c568","Type":"ContainerStarted","Data":"29d638fe4522fb6df0aef746b2865c871a6c410cb5bfdd04ff54392409aa73a6"} Jan 29 11:23:09 crc kubenswrapper[4766]: I0129 11:23:09.272598 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdltl" podStartSLOduration=90.272571986 podStartE2EDuration="1m30.272571986s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:09.272209475 +0000 UTC m=+126.384602486" watchObservedRunningTime="2026-01-29 11:23:09.272571986 +0000 UTC m=+126.384964997" Jan 29 11:23:10 crc kubenswrapper[4766]: I0129 11:23:10.224477 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:23:10 crc kubenswrapper[4766]: I0129 11:23:10.224655 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:23:10 crc kubenswrapper[4766]: E0129 11:23:10.224926 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:23:10 crc kubenswrapper[4766]: E0129 11:23:10.225074 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:23:10 crc kubenswrapper[4766]: E0129 11:23:10.849291 4766 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 11:23:11 crc kubenswrapper[4766]: I0129 11:23:11.224492 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:23:11 crc kubenswrapper[4766]: I0129 11:23:11.224639 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:23:11 crc kubenswrapper[4766]: E0129 11:23:11.224731 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:23:11 crc kubenswrapper[4766]: E0129 11:23:11.224919 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:23:12 crc kubenswrapper[4766]: I0129 11:23:12.224442 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:23:12 crc kubenswrapper[4766]: I0129 11:23:12.224452 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:23:12 crc kubenswrapper[4766]: E0129 11:23:12.225074 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:23:12 crc kubenswrapper[4766]: E0129 11:23:12.225253 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:23:12 crc kubenswrapper[4766]: I0129 11:23:12.225376 4766 scope.go:117] "RemoveContainer" containerID="4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b" Jan 29 11:23:12 crc kubenswrapper[4766]: E0129 11:23:12.225568 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zn4kn_openshift-ovn-kubernetes(98622e63-ce1a-413d-8a0a-32610d52ab94)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" Jan 29 11:23:13 crc kubenswrapper[4766]: I0129 11:23:13.224321 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:23:13 crc kubenswrapper[4766]: E0129 11:23:13.224581 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:23:13 crc kubenswrapper[4766]: I0129 11:23:13.224637 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:23:13 crc kubenswrapper[4766]: E0129 11:23:13.224903 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:23:14 crc kubenswrapper[4766]: I0129 11:23:14.223509 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:23:14 crc kubenswrapper[4766]: I0129 11:23:14.223509 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:23:14 crc kubenswrapper[4766]: E0129 11:23:14.223672 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:23:14 crc kubenswrapper[4766]: E0129 11:23:14.223799 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:23:15 crc kubenswrapper[4766]: I0129 11:23:15.223485 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:23:15 crc kubenswrapper[4766]: I0129 11:23:15.223553 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:23:15 crc kubenswrapper[4766]: E0129 11:23:15.224563 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:23:15 crc kubenswrapper[4766]: E0129 11:23:15.224754 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:23:15 crc kubenswrapper[4766]: E0129 11:23:15.850487 4766 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Jan 29 11:23:16 crc kubenswrapper[4766]: I0129 11:23:16.224348 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:23:16 crc kubenswrapper[4766]: I0129 11:23:16.224462 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:23:16 crc kubenswrapper[4766]: E0129 11:23:16.224596 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:23:16 crc kubenswrapper[4766]: E0129 11:23:16.224801 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:23:17 crc kubenswrapper[4766]: I0129 11:23:17.223756 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:23:17 crc kubenswrapper[4766]: I0129 11:23:17.223793 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:23:17 crc kubenswrapper[4766]: E0129 11:23:17.223996 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:23:17 crc kubenswrapper[4766]: E0129 11:23:17.224164 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:23:18 crc kubenswrapper[4766]: I0129 11:23:18.224155 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:23:18 crc kubenswrapper[4766]: I0129 11:23:18.224155 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:23:18 crc kubenswrapper[4766]: E0129 11:23:18.224384 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:23:18 crc kubenswrapper[4766]: E0129 11:23:18.224518 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:23:19 crc kubenswrapper[4766]: I0129 11:23:19.224141 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:23:19 crc kubenswrapper[4766]: I0129 11:23:19.224192 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:23:19 crc kubenswrapper[4766]: E0129 11:23:19.224365 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:23:19 crc kubenswrapper[4766]: E0129 11:23:19.224780 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:23:20 crc kubenswrapper[4766]: I0129 11:23:20.223884 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:23:20 crc kubenswrapper[4766]: I0129 11:23:20.223911 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:23:20 crc kubenswrapper[4766]: E0129 11:23:20.224090 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:23:20 crc kubenswrapper[4766]: E0129 11:23:20.224216 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:23:20 crc kubenswrapper[4766]: E0129 11:23:20.851686 4766 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 11:23:21 crc kubenswrapper[4766]: I0129 11:23:21.223761 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:23:21 crc kubenswrapper[4766]: I0129 11:23:21.223776 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:23:21 crc kubenswrapper[4766]: E0129 11:23:21.224030 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:23:21 crc kubenswrapper[4766]: E0129 11:23:21.224183 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:23:22 crc kubenswrapper[4766]: I0129 11:23:22.224058 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:23:22 crc kubenswrapper[4766]: I0129 11:23:22.224058 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:23:22 crc kubenswrapper[4766]: E0129 11:23:22.224235 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:23:22 crc kubenswrapper[4766]: E0129 11:23:22.224319 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:23:23 crc kubenswrapper[4766]: I0129 11:23:23.224391 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:23:23 crc kubenswrapper[4766]: I0129 11:23:23.224517 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:23:23 crc kubenswrapper[4766]: E0129 11:23:23.224958 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:23:23 crc kubenswrapper[4766]: E0129 11:23:23.224880 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:23:23 crc kubenswrapper[4766]: I0129 11:23:23.225486 4766 scope.go:117] "RemoveContainer" containerID="4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b" Jan 29 11:23:23 crc kubenswrapper[4766]: E0129 11:23:23.225726 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zn4kn_openshift-ovn-kubernetes(98622e63-ce1a-413d-8a0a-32610d52ab94)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" Jan 29 11:23:24 crc kubenswrapper[4766]: I0129 11:23:24.223627 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:23:24 crc kubenswrapper[4766]: I0129 11:23:24.223660 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:23:24 crc kubenswrapper[4766]: E0129 11:23:24.223907 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:23:24 crc kubenswrapper[4766]: E0129 11:23:24.224024 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:23:25 crc kubenswrapper[4766]: I0129 11:23:25.223766 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:23:25 crc kubenswrapper[4766]: I0129 11:23:25.223797 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:23:25 crc kubenswrapper[4766]: E0129 11:23:25.225816 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:23:25 crc kubenswrapper[4766]: E0129 11:23:25.226069 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:23:25 crc kubenswrapper[4766]: E0129 11:23:25.852432 4766 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 11:23:26 crc kubenswrapper[4766]: I0129 11:23:26.223817 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:23:26 crc kubenswrapper[4766]: I0129 11:23:26.223873 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:23:26 crc kubenswrapper[4766]: E0129 11:23:26.224055 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:23:26 crc kubenswrapper[4766]: E0129 11:23:26.224120 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:23:26 crc kubenswrapper[4766]: I0129 11:23:26.318908 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gnk2d_6986483f-6521-45da-9034-8576037c32ad/kube-multus/1.log" Jan 29 11:23:26 crc kubenswrapper[4766]: I0129 11:23:26.319513 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gnk2d_6986483f-6521-45da-9034-8576037c32ad/kube-multus/0.log" Jan 29 11:23:26 crc kubenswrapper[4766]: I0129 11:23:26.319565 4766 generic.go:334] "Generic (PLEG): container finished" podID="6986483f-6521-45da-9034-8576037c32ad" containerID="f08a33c85d7bb4c50e3fc2fb60c7b0f91c0bc795639c249410293ab1edd2d684" exitCode=1 Jan 29 11:23:26 crc kubenswrapper[4766]: I0129 11:23:26.319613 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gnk2d" event={"ID":"6986483f-6521-45da-9034-8576037c32ad","Type":"ContainerDied","Data":"f08a33c85d7bb4c50e3fc2fb60c7b0f91c0bc795639c249410293ab1edd2d684"} Jan 29 11:23:26 crc kubenswrapper[4766]: I0129 11:23:26.319711 4766 scope.go:117] "RemoveContainer" containerID="a9b01724cc972fcb6585d91e681d70640814c1429f20e331f25307d8d5c04c36" Jan 29 11:23:26 crc kubenswrapper[4766]: I0129 11:23:26.320322 4766 scope.go:117] "RemoveContainer" containerID="f08a33c85d7bb4c50e3fc2fb60c7b0f91c0bc795639c249410293ab1edd2d684" Jan 29 11:23:26 crc kubenswrapper[4766]: E0129 11:23:26.320622 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-gnk2d_openshift-multus(6986483f-6521-45da-9034-8576037c32ad)\"" pod="openshift-multus/multus-gnk2d" podUID="6986483f-6521-45da-9034-8576037c32ad" Jan 29 11:23:27 crc kubenswrapper[4766]: I0129 11:23:27.223514 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:23:27 crc kubenswrapper[4766]: I0129 11:23:27.223581 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:23:27 crc kubenswrapper[4766]: E0129 11:23:27.223693 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:23:27 crc kubenswrapper[4766]: E0129 11:23:27.223875 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:23:27 crc kubenswrapper[4766]: I0129 11:23:27.326255 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gnk2d_6986483f-6521-45da-9034-8576037c32ad/kube-multus/1.log" Jan 29 11:23:28 crc kubenswrapper[4766]: I0129 11:23:28.223558 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:23:28 crc kubenswrapper[4766]: E0129 11:23:28.223833 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:23:28 crc kubenswrapper[4766]: I0129 11:23:28.223630 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:23:28 crc kubenswrapper[4766]: E0129 11:23:28.224767 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:23:29 crc kubenswrapper[4766]: I0129 11:23:29.224000 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:23:29 crc kubenswrapper[4766]: I0129 11:23:29.224049 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:23:29 crc kubenswrapper[4766]: E0129 11:23:29.224156 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:23:29 crc kubenswrapper[4766]: E0129 11:23:29.224307 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:23:30 crc kubenswrapper[4766]: I0129 11:23:30.223840 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:23:30 crc kubenswrapper[4766]: I0129 11:23:30.223895 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:23:30 crc kubenswrapper[4766]: E0129 11:23:30.224564 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:23:30 crc kubenswrapper[4766]: E0129 11:23:30.224972 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:23:30 crc kubenswrapper[4766]: E0129 11:23:30.854784 4766 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 11:23:31 crc kubenswrapper[4766]: I0129 11:23:31.224049 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:23:31 crc kubenswrapper[4766]: I0129 11:23:31.224199 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:23:31 crc kubenswrapper[4766]: E0129 11:23:31.224263 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:23:31 crc kubenswrapper[4766]: E0129 11:23:31.224487 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:23:32 crc kubenswrapper[4766]: I0129 11:23:32.223878 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:23:32 crc kubenswrapper[4766]: I0129 11:23:32.223976 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:23:32 crc kubenswrapper[4766]: E0129 11:23:32.224056 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:23:32 crc kubenswrapper[4766]: E0129 11:23:32.224175 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:23:33 crc kubenswrapper[4766]: I0129 11:23:33.224521 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:23:33 crc kubenswrapper[4766]: I0129 11:23:33.224634 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:23:33 crc kubenswrapper[4766]: E0129 11:23:33.224720 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:23:33 crc kubenswrapper[4766]: E0129 11:23:33.224841 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:23:34 crc kubenswrapper[4766]: I0129 11:23:34.224370 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:23:34 crc kubenswrapper[4766]: I0129 11:23:34.224404 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:23:34 crc kubenswrapper[4766]: E0129 11:23:34.224898 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:23:34 crc kubenswrapper[4766]: E0129 11:23:34.225301 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:23:34 crc kubenswrapper[4766]: I0129 11:23:34.225714 4766 scope.go:117] "RemoveContainer" containerID="4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b" Jan 29 11:23:34 crc kubenswrapper[4766]: I0129 11:23:34.356518 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zn4kn_98622e63-ce1a-413d-8a0a-32610d52ab94/ovnkube-controller/3.log" Jan 29 11:23:34 crc kubenswrapper[4766]: I0129 11:23:34.361198 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" event={"ID":"98622e63-ce1a-413d-8a0a-32610d52ab94","Type":"ContainerStarted","Data":"012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b"} Jan 29 11:23:34 crc kubenswrapper[4766]: I0129 11:23:34.361741 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:23:35 crc kubenswrapper[4766]: I0129 11:23:35.223543 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:23:35 crc kubenswrapper[4766]: I0129 11:23:35.223622 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:23:35 crc kubenswrapper[4766]: E0129 11:23:35.224658 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:23:35 crc kubenswrapper[4766]: E0129 11:23:35.224788 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:23:35 crc kubenswrapper[4766]: I0129 11:23:35.229888 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" podStartSLOduration=116.229869491 podStartE2EDuration="1m56.229869491s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:34.391782634 +0000 UTC m=+151.504175665" watchObservedRunningTime="2026-01-29 11:23:35.229869491 +0000 UTC m=+152.342262502" Jan 29 11:23:35 crc kubenswrapper[4766]: I0129 11:23:35.230296 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xrjg5"] Jan 29 11:23:35 crc kubenswrapper[4766]: I0129 11:23:35.364310 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:23:35 crc kubenswrapper[4766]: E0129 11:23:35.364472 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:23:35 crc kubenswrapper[4766]: E0129 11:23:35.855376 4766 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 11:23:36 crc kubenswrapper[4766]: I0129 11:23:36.224171 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:23:36 crc kubenswrapper[4766]: E0129 11:23:36.224321 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:23:36 crc kubenswrapper[4766]: I0129 11:23:36.224171 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:23:36 crc kubenswrapper[4766]: E0129 11:23:36.224596 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:23:37 crc kubenswrapper[4766]: I0129 11:23:37.224494 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:23:37 crc kubenswrapper[4766]: I0129 11:23:37.224622 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:23:37 crc kubenswrapper[4766]: E0129 11:23:37.224679 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:23:37 crc kubenswrapper[4766]: E0129 11:23:37.224793 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:23:38 crc kubenswrapper[4766]: I0129 11:23:38.223679 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:23:38 crc kubenswrapper[4766]: I0129 11:23:38.223730 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:23:38 crc kubenswrapper[4766]: E0129 11:23:38.223857 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:23:38 crc kubenswrapper[4766]: E0129 11:23:38.224020 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:23:39 crc kubenswrapper[4766]: I0129 11:23:39.223790 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:23:39 crc kubenswrapper[4766]: I0129 11:23:39.223805 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:23:39 crc kubenswrapper[4766]: E0129 11:23:39.224080 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:23:39 crc kubenswrapper[4766]: I0129 11:23:39.224206 4766 scope.go:117] "RemoveContainer" containerID="f08a33c85d7bb4c50e3fc2fb60c7b0f91c0bc795639c249410293ab1edd2d684" Jan 29 11:23:39 crc kubenswrapper[4766]: E0129 11:23:39.224257 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:23:40 crc kubenswrapper[4766]: I0129 11:23:40.223877 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:23:40 crc kubenswrapper[4766]: I0129 11:23:40.223902 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:23:40 crc kubenswrapper[4766]: E0129 11:23:40.224487 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:23:40 crc kubenswrapper[4766]: E0129 11:23:40.224604 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:23:40 crc kubenswrapper[4766]: I0129 11:23:40.386670 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gnk2d_6986483f-6521-45da-9034-8576037c32ad/kube-multus/1.log" Jan 29 11:23:40 crc kubenswrapper[4766]: I0129 11:23:40.386758 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gnk2d" event={"ID":"6986483f-6521-45da-9034-8576037c32ad","Type":"ContainerStarted","Data":"bd6d2609f7daaf516c85d29c744307fe0c6788ba02d9625f66fa94efe9993566"} Jan 29 11:23:40 crc kubenswrapper[4766]: E0129 11:23:40.857952 4766 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 11:23:41 crc kubenswrapper[4766]: I0129 11:23:41.224133 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:23:41 crc kubenswrapper[4766]: I0129 11:23:41.224192 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:23:41 crc kubenswrapper[4766]: E0129 11:23:41.224319 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:23:41 crc kubenswrapper[4766]: E0129 11:23:41.224389 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:23:42 crc kubenswrapper[4766]: I0129 11:23:42.223609 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:23:42 crc kubenswrapper[4766]: I0129 11:23:42.223761 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:23:42 crc kubenswrapper[4766]: E0129 11:23:42.223830 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:23:42 crc kubenswrapper[4766]: E0129 11:23:42.223963 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:23:43 crc kubenswrapper[4766]: I0129 11:23:43.223878 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:23:43 crc kubenswrapper[4766]: I0129 11:23:43.223952 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:23:43 crc kubenswrapper[4766]: E0129 11:23:43.224038 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:23:43 crc kubenswrapper[4766]: E0129 11:23:43.224178 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:23:44 crc kubenswrapper[4766]: I0129 11:23:44.223956 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:23:44 crc kubenswrapper[4766]: I0129 11:23:44.224024 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:23:44 crc kubenswrapper[4766]: E0129 11:23:44.224117 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:23:44 crc kubenswrapper[4766]: E0129 11:23:44.224186 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:23:45 crc kubenswrapper[4766]: I0129 11:23:45.224241 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:23:45 crc kubenswrapper[4766]: I0129 11:23:45.224305 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:23:45 crc kubenswrapper[4766]: E0129 11:23:45.225458 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:23:45 crc kubenswrapper[4766]: E0129 11:23:45.225589 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjg5" podUID="3910984a-a754-462f-9414-183a50bb78b8" Jan 29 11:23:46 crc kubenswrapper[4766]: I0129 11:23:46.223536 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:23:46 crc kubenswrapper[4766]: I0129 11:23:46.223570 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:23:46 crc kubenswrapper[4766]: I0129 11:23:46.226725 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 29 11:23:46 crc kubenswrapper[4766]: I0129 11:23:46.226875 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 29 11:23:46 crc kubenswrapper[4766]: I0129 11:23:46.226948 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 29 11:23:46 crc kubenswrapper[4766]: I0129 11:23:46.227546 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 29 11:23:46 crc kubenswrapper[4766]: I0129 11:23:46.387585 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:23:47 crc kubenswrapper[4766]: I0129 11:23:47.224617 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:23:47 crc kubenswrapper[4766]: I0129 11:23:47.224617 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:23:47 crc kubenswrapper[4766]: I0129 11:23:47.227854 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 29 11:23:47 crc kubenswrapper[4766]: I0129 11:23:47.227874 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.066042 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.103124 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-n4rj2"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.103959 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-q65jj"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.104324 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.104824 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-q65jj" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.105572 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lq2vd"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.106384 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.106415 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lq2vd" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.107139 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.107677 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.108251 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.108391 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-bqx75"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.109159 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-bqx75" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.109950 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zj2l7"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.110722 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.111568 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-ncttr"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.111977 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ncttr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.115640 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfddr"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.116107 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfddr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.150855 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.150879 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.151080 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.159120 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.165071 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.165961 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.166148 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.166165 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.166266 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.167932 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.177015 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.185549 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.185645 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.193209 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.194494 4766 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.194958 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.195213 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.195386 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.195451 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.195770 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.196126 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.196256 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.196486 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.196536 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xwtsb"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.196636 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.196769 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.196871 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.196872 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.196966 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.197023 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.197054 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.197120 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.197141 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.197263 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.197282 4766 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-r9vtz"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.197306 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.197371 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.197742 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.198201 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.198596 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xwtsb" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.202456 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.203146 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.207897 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.208199 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.223765 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.223867 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.246219 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.246663 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.248839 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qkwt7"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.249272 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.249654 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkwt7" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.258018 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.258220 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.258615 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.258754 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.258808 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.258868 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.258996 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.259099 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.259155 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.259181 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.259263 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.259275 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.259346 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.259394 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.259746 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.259839 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.259866 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.260015 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.260126 4766 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.260222 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.260416 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.260562 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.260671 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.260863 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.261093 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.261519 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.261805 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.261951 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267103 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84fks\" (UniqueName: \"kubernetes.io/projected/d2900468-bc28-42ef-8624-0e5b0a80f772-kube-api-access-84fks\") pod \"openshift-apiserver-operator-796bbdcf4f-bfddr\" (UID: \"d2900468-bc28-42ef-8624-0e5b0a80f772\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfddr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267166 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/22f4cece-ea69-4c25-b492-8d03d960353e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-q65jj\" (UID: \"22f4cece-ea69-4c25-b492-8d03d960353e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q65jj" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267197 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsm67\" (UniqueName: \"kubernetes.io/projected/f0a6fc20-9a8f-4e97-8689-890f8a931a86-kube-api-access-lsm67\") pod \"apiserver-7bbb656c7d-phb5g\" (UID: \"f0a6fc20-9a8f-4e97-8689-890f8a931a86\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267221 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/569bc384-3b96-4207-8d46-5a27bf7f21cd-console-config\") pod \"console-f9d7485db-ncttr\" (UID: \"569bc384-3b96-4207-8d46-5a27bf7f21cd\") " pod="openshift-console/console-f9d7485db-ncttr" Jan 29 11:23:49 crc kubenswrapper[4766]: 
I0129 11:23:49.267244 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/569bc384-3b96-4207-8d46-5a27bf7f21cd-trusted-ca-bundle\") pod \"console-f9d7485db-ncttr\" (UID: \"569bc384-3b96-4207-8d46-5a27bf7f21cd\") " pod="openshift-console/console-f9d7485db-ncttr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267276 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttvpj\" (UniqueName: \"kubernetes.io/projected/5ab22459-f606-452e-a71d-9f7e9212518d-kube-api-access-ttvpj\") pod \"cluster-samples-operator-665b6dd947-lq2vd\" (UID: \"5ab22459-f606-452e-a71d-9f7e9212518d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lq2vd" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267296 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a99b07fd-7413-4523-8812-f0c7fe540f6d-audit-dir\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267316 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l97jh\" (UniqueName: \"kubernetes.io/projected/a99b07fd-7413-4523-8812-f0c7fe540f6d-kube-api-access-l97jh\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267339 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2cf63d06-b674-4a7b-b896-5c78bc9d412d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zj2l7\" (UID: \"2cf63d06-b674-4a7b-b896-5c78bc9d412d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267359 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a99b07fd-7413-4523-8812-f0c7fe540f6d-etcd-client\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267375 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22f4cece-ea69-4c25-b492-8d03d960353e-config\") pod \"machine-api-operator-5694c8668f-q65jj\" (UID: \"22f4cece-ea69-4c25-b492-8d03d960353e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q65jj" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267395 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f0a6fc20-9a8f-4e97-8689-890f8a931a86-encryption-config\") pod \"apiserver-7bbb656c7d-phb5g\" (UID: \"f0a6fc20-9a8f-4e97-8689-890f8a931a86\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267441 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/f0a6fc20-9a8f-4e97-8689-890f8a931a86-audit-dir\") pod \"apiserver-7bbb656c7d-phb5g\" (UID: \"f0a6fc20-9a8f-4e97-8689-890f8a931a86\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267564 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0a6fc20-9a8f-4e97-8689-890f8a931a86-serving-cert\") pod \"apiserver-7bbb656c7d-phb5g\" (UID: \"f0a6fc20-9a8f-4e97-8689-890f8a931a86\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267582 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/569bc384-3b96-4207-8d46-5a27bf7f21cd-console-serving-cert\") pod \"console-f9d7485db-ncttr\" (UID: \"569bc384-3b96-4207-8d46-5a27bf7f21cd\") " pod="openshift-console/console-f9d7485db-ncttr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267599 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/569bc384-3b96-4207-8d46-5a27bf7f21cd-service-ca\") pod \"console-f9d7485db-ncttr\" (UID: \"569bc384-3b96-4207-8d46-5a27bf7f21cd\") " pod="openshift-console/console-f9d7485db-ncttr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267620 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5ab22459-f606-452e-a71d-9f7e9212518d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-lq2vd\" (UID: \"5ab22459-f606-452e-a71d-9f7e9212518d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lq2vd" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267646 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a99b07fd-7413-4523-8812-f0c7fe540f6d-etcd-serving-ca\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267699 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6knvg\" (UniqueName: \"kubernetes.io/projected/3992a1ef-5774-468c-9640-cd23218862cc-kube-api-access-6knvg\") pod \"downloads-7954f5f757-bqx75\" (UID: \"3992a1ef-5774-468c-9640-cd23218862cc\") " pod="openshift-console/downloads-7954f5f757-bqx75" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267722 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/22f4cece-ea69-4c25-b492-8d03d960353e-images\") pod \"machine-api-operator-5694c8668f-q65jj\" (UID: \"22f4cece-ea69-4c25-b492-8d03d960353e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q65jj" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267739 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f0a6fc20-9a8f-4e97-8689-890f8a931a86-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-phb5g\" (UID: \"f0a6fc20-9a8f-4e97-8689-890f8a931a86\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267756 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2900468-bc28-42ef-8624-0e5b0a80f772-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bfddr\" (UID: \"d2900468-bc28-42ef-8624-0e5b0a80f772\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfddr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267773 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f0a6fc20-9a8f-4e97-8689-890f8a931a86-audit-policies\") pod \"apiserver-7bbb656c7d-phb5g\" (UID: \"f0a6fc20-9a8f-4e97-8689-890f8a931a86\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267790 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a99b07fd-7413-4523-8812-f0c7fe540f6d-config\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267809 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0a6fc20-9a8f-4e97-8689-890f8a931a86-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-phb5g\" (UID: \"f0a6fc20-9a8f-4e97-8689-890f8a931a86\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267827 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a99b07fd-7413-4523-8812-f0c7fe540f6d-image-import-ca\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267853 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f0a6fc20-9a8f-4e97-8689-890f8a931a86-etcd-client\") pod \"apiserver-7bbb656c7d-phb5g\" (UID: \"f0a6fc20-9a8f-4e97-8689-890f8a931a86\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267877 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf63d06-b674-4a7b-b896-5c78bc9d412d-config\") pod \"controller-manager-879f6c89f-zj2l7\" (UID: \"2cf63d06-b674-4a7b-b896-5c78bc9d412d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267897 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cf63d06-b674-4a7b-b896-5c78bc9d412d-client-ca\") pod \"controller-manager-879f6c89f-zj2l7\" (UID: \"2cf63d06-b674-4a7b-b896-5c78bc9d412d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267919 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/569bc384-3b96-4207-8d46-5a27bf7f21cd-oauth-serving-cert\") pod \"console-f9d7485db-ncttr\" (UID: \"569bc384-3b96-4207-8d46-5a27bf7f21cd\") " pod="openshift-console/console-f9d7485db-ncttr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267943 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f093c2f4-8a68-4d38-b957-21dd36402984-config\") pod \"route-controller-manager-6576b87f9c-fs4gv\" (UID: \"f093c2f4-8a68-4d38-b957-21dd36402984\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267968 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxk6x\" (UniqueName: \"kubernetes.io/projected/569bc384-3b96-4207-8d46-5a27bf7f21cd-kube-api-access-cxk6x\") pod \"console-f9d7485db-ncttr\" (UID: \"569bc384-3b96-4207-8d46-5a27bf7f21cd\") " pod="openshift-console/console-f9d7485db-ncttr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.267991 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a99b07fd-7413-4523-8812-f0c7fe540f6d-trusted-ca-bundle\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.268022 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a99b07fd-7413-4523-8812-f0c7fe540f6d-node-pullsecrets\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.268058 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqh4q\" (UniqueName: \"kubernetes.io/projected/22f4cece-ea69-4c25-b492-8d03d960353e-kube-api-access-fqh4q\") pod \"machine-api-operator-5694c8668f-q65jj\" (UID: \"22f4cece-ea69-4c25-b492-8d03d960353e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q65jj" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.268079 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a99b07fd-7413-4523-8812-f0c7fe540f6d-serving-cert\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.268106 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cf63d06-b674-4a7b-b896-5c78bc9d412d-serving-cert\") pod \"controller-manager-879f6c89f-zj2l7\" (UID: \"2cf63d06-b674-4a7b-b896-5c78bc9d412d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.268129 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/a99b07fd-7413-4523-8812-f0c7fe540f6d-encryption-config\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.268164 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f093c2f4-8a68-4d38-b957-21dd36402984-client-ca\") pod \"route-controller-manager-6576b87f9c-fs4gv\" (UID: \"f093c2f4-8a68-4d38-b957-21dd36402984\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.268192 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2900468-bc28-42ef-8624-0e5b0a80f772-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bfddr\" (UID: \"d2900468-bc28-42ef-8624-0e5b0a80f772\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfddr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.268213 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/569bc384-3b96-4207-8d46-5a27bf7f21cd-console-oauth-config\") pod \"console-f9d7485db-ncttr\" (UID: \"569bc384-3b96-4207-8d46-5a27bf7f21cd\") " pod="openshift-console/console-f9d7485db-ncttr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.268239 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a99b07fd-7413-4523-8812-f0c7fe540f6d-audit\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.268262 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzxpp\" (UniqueName: \"kubernetes.io/projected/f093c2f4-8a68-4d38-b957-21dd36402984-kube-api-access-wzxpp\") pod \"route-controller-manager-6576b87f9c-fs4gv\" (UID: \"f093c2f4-8a68-4d38-b957-21dd36402984\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.268282 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f093c2f4-8a68-4d38-b957-21dd36402984-serving-cert\") pod \"route-controller-manager-6576b87f9c-fs4gv\" (UID: \"f093c2f4-8a68-4d38-b957-21dd36402984\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.268301 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkznx\" (UniqueName: \"kubernetes.io/projected/2cf63d06-b674-4a7b-b896-5c78bc9d412d-kube-api-access-gkznx\") pod \"controller-manager-879f6c89f-zj2l7\" (UID: \"2cf63d06-b674-4a7b-b896-5c78bc9d412d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.268615 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-mkpxk"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 
11:23:49.269589 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vnx7s"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.269647 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mkpxk" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.272699 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6mbb9"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.273125 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6mbb9" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.273727 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-vnx7s" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.286594 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mjdnh"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.287831 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gj52d"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.288601 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gj52d" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.287858 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mjdnh" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.290816 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-x9vrs"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.291633 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-x9vrs" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.297819 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.298034 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.297831 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.352890 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.353159 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.353404 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.354010 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.354641 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7zssb"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.355492 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxg9v"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.355642 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.355933 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxg9v" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.356275 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-7zssb" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.369504 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cf63d06-b674-4a7b-b896-5c78bc9d412d-serving-cert\") pod \"controller-manager-879f6c89f-zj2l7\" (UID: \"2cf63d06-b674-4a7b-b896-5c78bc9d412d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.369547 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a99b07fd-7413-4523-8812-f0c7fe540f6d-encryption-config\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.369572 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.369617 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f093c2f4-8a68-4d38-b957-21dd36402984-client-ca\") pod \"route-controller-manager-6576b87f9c-fs4gv\" (UID: \"f093c2f4-8a68-4d38-b957-21dd36402984\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.369641 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53d8ed7c-3414-4b1f-98c0-5b577dbc5b31-serving-cert\") pod \"openshift-config-operator-7777fb866f-qkwt7\" (UID: \"53d8ed7c-3414-4b1f-98c0-5b577dbc5b31\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkwt7" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.369663 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e7553566-6a69-4542-892b-bd74d3c8ac0e-machine-approver-tls\") pod \"machine-approver-56656f9798-mkpxk\" (UID: \"e7553566-6a69-4542-892b-bd74d3c8ac0e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mkpxk" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.369679 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.369699 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2900468-bc28-42ef-8624-0e5b0a80f772-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bfddr\" (UID: \"d2900468-bc28-42ef-8624-0e5b0a80f772\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfddr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.369719 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/569bc384-3b96-4207-8d46-5a27bf7f21cd-console-oauth-config\") pod \"console-f9d7485db-ncttr\" (UID: \"569bc384-3b96-4207-8d46-5a27bf7f21cd\") " pod="openshift-console/console-f9d7485db-ncttr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.369736 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l5ps\" (UniqueName: \"kubernetes.io/projected/e0d3c828-9641-4030-acfc-282a4dadcf1d-kube-api-access-8l5ps\") pod \"authentication-operator-69f744f599-vnx7s\" (UID: \"e0d3c828-9641-4030-acfc-282a4dadcf1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vnx7s" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.369752 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ee06c09-afd6-4909-a722-2812c4c391b7-serving-cert\") pod \"console-operator-58897d9998-xwtsb\" (UID: \"6ee06c09-afd6-4909-a722-2812c4c391b7\") " pod="openshift-console-operator/console-operator-58897d9998-xwtsb" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.369766 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.369786 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/53d8ed7c-3414-4b1f-98c0-5b577dbc5b31-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qkwt7\" (UID: \"53d8ed7c-3414-4b1f-98c0-5b577dbc5b31\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkwt7" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.369802 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.369821 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a99b07fd-7413-4523-8812-f0c7fe540f6d-audit\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.369839 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2bsn\" (UniqueName: \"kubernetes.io/projected/eeaa3b58-307f-43e3-b8be-f5a93ae40bdc-kube-api-access-d2bsn\") pod \"cluster-image-registry-operator-dc59b4c8b-6mbb9\" (UID: \"eeaa3b58-307f-43e3-b8be-f5a93ae40bdc\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6mbb9" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.369858 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a7315b30-c300-4afe-b798-de15fe9e9cc8-audit-policies\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.369881 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzxpp\" (UniqueName: \"kubernetes.io/projected/f093c2f4-8a68-4d38-b957-21dd36402984-kube-api-access-wzxpp\") pod \"route-controller-manager-6576b87f9c-fs4gv\" (UID: \"f093c2f4-8a68-4d38-b957-21dd36402984\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.369896 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0d3c828-9641-4030-acfc-282a4dadcf1d-service-ca-bundle\") pod \"authentication-operator-69f744f599-vnx7s\" (UID: \"e0d3c828-9641-4030-acfc-282a4dadcf1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vnx7s" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.369912 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f093c2f4-8a68-4d38-b957-21dd36402984-serving-cert\") pod \"route-controller-manager-6576b87f9c-fs4gv\" (UID: \"f093c2f4-8a68-4d38-b957-21dd36402984\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.369931 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkznx\" (UniqueName: \"kubernetes.io/projected/2cf63d06-b674-4a7b-b896-5c78bc9d412d-kube-api-access-gkznx\") pod \"controller-manager-879f6c89f-zj2l7\" (UID: \"2cf63d06-b674-4a7b-b896-5c78bc9d412d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.369949 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eeaa3b58-307f-43e3-b8be-f5a93ae40bdc-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6mbb9\" (UID: \"eeaa3b58-307f-43e3-b8be-f5a93ae40bdc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6mbb9" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.369966 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7553566-6a69-4542-892b-bd74d3c8ac0e-config\") pod \"machine-approver-56656f9798-mkpxk\" (UID: \"e7553566-6a69-4542-892b-bd74d3c8ac0e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mkpxk" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.369982 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: 
\"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.369997 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370018 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84fks\" (UniqueName: \"kubernetes.io/projected/d2900468-bc28-42ef-8624-0e5b0a80f772-kube-api-access-84fks\") pod \"openshift-apiserver-operator-796bbdcf4f-bfddr\" (UID: \"d2900468-bc28-42ef-8624-0e5b0a80f772\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfddr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370045 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/22f4cece-ea69-4c25-b492-8d03d960353e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-q65jj\" (UID: \"22f4cece-ea69-4c25-b492-8d03d960353e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q65jj" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370061 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsm67\" (UniqueName: \"kubernetes.io/projected/f0a6fc20-9a8f-4e97-8689-890f8a931a86-kube-api-access-lsm67\") pod \"apiserver-7bbb656c7d-phb5g\" (UID: \"f0a6fc20-9a8f-4e97-8689-890f8a931a86\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370079 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/569bc384-3b96-4207-8d46-5a27bf7f21cd-console-config\") pod \"console-f9d7485db-ncttr\" (UID: \"569bc384-3b96-4207-8d46-5a27bf7f21cd\") " pod="openshift-console/console-f9d7485db-ncttr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370104 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eeaa3b58-307f-43e3-b8be-f5a93ae40bdc-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6mbb9\" (UID: \"eeaa3b58-307f-43e3-b8be-f5a93ae40bdc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6mbb9" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370124 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/569bc384-3b96-4207-8d46-5a27bf7f21cd-trusted-ca-bundle\") pod \"console-f9d7485db-ncttr\" (UID: \"569bc384-3b96-4207-8d46-5a27bf7f21cd\") " pod="openshift-console/console-f9d7485db-ncttr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370141 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0d3c828-9641-4030-acfc-282a4dadcf1d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vnx7s\" (UID: \"e0d3c828-9641-4030-acfc-282a4dadcf1d\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-vnx7s" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370162 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr8hf\" (UniqueName: \"kubernetes.io/projected/6ee06c09-afd6-4909-a722-2812c4c391b7-kube-api-access-vr8hf\") pod \"console-operator-58897d9998-xwtsb\" (UID: \"6ee06c09-afd6-4909-a722-2812c4c391b7\") " pod="openshift-console-operator/console-operator-58897d9998-xwtsb" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370191 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttvpj\" (UniqueName: \"kubernetes.io/projected/5ab22459-f606-452e-a71d-9f7e9212518d-kube-api-access-ttvpj\") pod \"cluster-samples-operator-665b6dd947-lq2vd\" (UID: \"5ab22459-f606-452e-a71d-9f7e9212518d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lq2vd" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370210 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a99b07fd-7413-4523-8812-f0c7fe540f6d-audit-dir\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370226 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l97jh\" (UniqueName: \"kubernetes.io/projected/a99b07fd-7413-4523-8812-f0c7fe540f6d-kube-api-access-l97jh\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370243 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2cf63d06-b674-4a7b-b896-5c78bc9d412d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zj2l7\" (UID: \"2cf63d06-b674-4a7b-b896-5c78bc9d412d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370259 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a99b07fd-7413-4523-8812-f0c7fe540f6d-etcd-client\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370276 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ee06c09-afd6-4909-a722-2812c4c391b7-config\") pod \"console-operator-58897d9998-xwtsb\" (UID: \"6ee06c09-afd6-4909-a722-2812c4c391b7\") " pod="openshift-console-operator/console-operator-58897d9998-xwtsb" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370295 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22f4cece-ea69-4c25-b492-8d03d960353e-config\") pod \"machine-api-operator-5694c8668f-q65jj\" (UID: \"22f4cece-ea69-4c25-b492-8d03d960353e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q65jj" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370332 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/f0a6fc20-9a8f-4e97-8689-890f8a931a86-encryption-config\") pod \"apiserver-7bbb656c7d-phb5g\" (UID: \"f0a6fc20-9a8f-4e97-8689-890f8a931a86\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370371 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f0a6fc20-9a8f-4e97-8689-890f8a931a86-audit-dir\") pod \"apiserver-7bbb656c7d-phb5g\" (UID: \"f0a6fc20-9a8f-4e97-8689-890f8a931a86\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370395 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzd9l\" (UniqueName: \"kubernetes.io/projected/e7553566-6a69-4542-892b-bd74d3c8ac0e-kube-api-access-dzd9l\") pod \"machine-approver-56656f9798-mkpxk\" (UID: \"e7553566-6a69-4542-892b-bd74d3c8ac0e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mkpxk" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370449 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0a6fc20-9a8f-4e97-8689-890f8a931a86-serving-cert\") pod \"apiserver-7bbb656c7d-phb5g\" (UID: \"f0a6fc20-9a8f-4e97-8689-890f8a931a86\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370484 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/569bc384-3b96-4207-8d46-5a27bf7f21cd-console-serving-cert\") pod \"console-f9d7485db-ncttr\" (UID: \"569bc384-3b96-4207-8d46-5a27bf7f21cd\") " pod="openshift-console/console-f9d7485db-ncttr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370515 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/569bc384-3b96-4207-8d46-5a27bf7f21cd-service-ca\") pod \"console-f9d7485db-ncttr\" (UID: \"569bc384-3b96-4207-8d46-5a27bf7f21cd\") " pod="openshift-console/console-f9d7485db-ncttr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370536 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0d3c828-9641-4030-acfc-282a4dadcf1d-serving-cert\") pod \"authentication-operator-69f744f599-vnx7s\" (UID: \"e0d3c828-9641-4030-acfc-282a4dadcf1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vnx7s" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370564 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a7315b30-c300-4afe-b798-de15fe9e9cc8-audit-dir\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370605 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5ab22459-f606-452e-a71d-9f7e9212518d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-lq2vd\" (UID: \"5ab22459-f606-452e-a71d-9f7e9212518d\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lq2vd" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370648 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skv2q\" (UniqueName: \"kubernetes.io/projected/53d8ed7c-3414-4b1f-98c0-5b577dbc5b31-kube-api-access-skv2q\") pod \"openshift-config-operator-7777fb866f-qkwt7\" (UID: \"53d8ed7c-3414-4b1f-98c0-5b577dbc5b31\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkwt7" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370732 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4mrt\" (UniqueName: \"kubernetes.io/projected/a7315b30-c300-4afe-b798-de15fe9e9cc8-kube-api-access-m4mrt\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370793 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6knvg\" (UniqueName: \"kubernetes.io/projected/3992a1ef-5774-468c-9640-cd23218862cc-kube-api-access-6knvg\") pod \"downloads-7954f5f757-bqx75\" (UID: \"3992a1ef-5774-468c-9640-cd23218862cc\") " pod="openshift-console/downloads-7954f5f757-bqx75" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370830 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/22f4cece-ea69-4c25-b492-8d03d960353e-images\") pod \"machine-api-operator-5694c8668f-q65jj\" (UID: \"22f4cece-ea69-4c25-b492-8d03d960353e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q65jj" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370868 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f0a6fc20-9a8f-4e97-8689-890f8a931a86-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-phb5g\" (UID: \"f0a6fc20-9a8f-4e97-8689-890f8a931a86\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370902 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a99b07fd-7413-4523-8812-f0c7fe540f6d-etcd-serving-ca\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370936 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370957 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2900468-bc28-42ef-8624-0e5b0a80f772-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bfddr\" (UID: \"d2900468-bc28-42ef-8624-0e5b0a80f772\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfddr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370973 
4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f0a6fc20-9a8f-4e97-8689-890f8a931a86-audit-policies\") pod \"apiserver-7bbb656c7d-phb5g\" (UID: \"f0a6fc20-9a8f-4e97-8689-890f8a931a86\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.370987 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a99b07fd-7413-4523-8812-f0c7fe540f6d-config\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.371003 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ee06c09-afd6-4909-a722-2812c4c391b7-trusted-ca\") pod \"console-operator-58897d9998-xwtsb\" (UID: \"6ee06c09-afd6-4909-a722-2812c4c391b7\") " pod="openshift-console-operator/console-operator-58897d9998-xwtsb" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.371020 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f0a6fc20-9a8f-4e97-8689-890f8a931a86-etcd-client\") pod \"apiserver-7bbb656c7d-phb5g\" (UID: \"f0a6fc20-9a8f-4e97-8689-890f8a931a86\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.371037 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0a6fc20-9a8f-4e97-8689-890f8a931a86-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-phb5g\" (UID: \"f0a6fc20-9a8f-4e97-8689-890f8a931a86\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.371052 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a99b07fd-7413-4523-8812-f0c7fe540f6d-image-import-ca\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.371073 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.371092 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.371111 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf63d06-b674-4a7b-b896-5c78bc9d412d-config\") pod 
\"controller-manager-879f6c89f-zj2l7\" (UID: \"2cf63d06-b674-4a7b-b896-5c78bc9d412d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.371130 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cf63d06-b674-4a7b-b896-5c78bc9d412d-client-ca\") pod \"controller-manager-879f6c89f-zj2l7\" (UID: \"2cf63d06-b674-4a7b-b896-5c78bc9d412d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.371148 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/569bc384-3b96-4207-8d46-5a27bf7f21cd-oauth-serving-cert\") pod \"console-f9d7485db-ncttr\" (UID: \"569bc384-3b96-4207-8d46-5a27bf7f21cd\") " pod="openshift-console/console-f9d7485db-ncttr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.371167 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f093c2f4-8a68-4d38-b957-21dd36402984-config\") pod \"route-controller-manager-6576b87f9c-fs4gv\" (UID: \"f093c2f4-8a68-4d38-b957-21dd36402984\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.371186 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxk6x\" (UniqueName: \"kubernetes.io/projected/569bc384-3b96-4207-8d46-5a27bf7f21cd-kube-api-access-cxk6x\") pod \"console-f9d7485db-ncttr\" (UID: \"569bc384-3b96-4207-8d46-5a27bf7f21cd\") " pod="openshift-console/console-f9d7485db-ncttr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.371205 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a99b07fd-7413-4523-8812-f0c7fe540f6d-trusted-ca-bundle\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.371224 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0d3c828-9641-4030-acfc-282a4dadcf1d-config\") pod \"authentication-operator-69f744f599-vnx7s\" (UID: \"e0d3c828-9641-4030-acfc-282a4dadcf1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vnx7s" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.371252 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.371294 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a99b07fd-7413-4523-8812-f0c7fe540f6d-node-pullsecrets\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.371310 
4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e7553566-6a69-4542-892b-bd74d3c8ac0e-auth-proxy-config\") pod \"machine-approver-56656f9798-mkpxk\" (UID: \"e7553566-6a69-4542-892b-bd74d3c8ac0e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mkpxk" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.371338 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.371357 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqh4q\" (UniqueName: \"kubernetes.io/projected/22f4cece-ea69-4c25-b492-8d03d960353e-kube-api-access-fqh4q\") pod \"machine-api-operator-5694c8668f-q65jj\" (UID: \"22f4cece-ea69-4c25-b492-8d03d960353e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q65jj" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.371373 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a99b07fd-7413-4523-8812-f0c7fe540f6d-serving-cert\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.371391 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/eeaa3b58-307f-43e3-b8be-f5a93ae40bdc-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-6mbb9\" (UID: \"eeaa3b58-307f-43e3-b8be-f5a93ae40bdc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6mbb9" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.373378 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22f4cece-ea69-4c25-b492-8d03d960353e-config\") pod \"machine-api-operator-5694c8668f-q65jj\" (UID: \"22f4cece-ea69-4c25-b492-8d03d960353e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q65jj" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.374252 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a99b07fd-7413-4523-8812-f0c7fe540f6d-config\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.375780 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f093c2f4-8a68-4d38-b957-21dd36402984-client-ca\") pod \"route-controller-manager-6576b87f9c-fs4gv\" (UID: \"f093c2f4-8a68-4d38-b957-21dd36402984\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.375824 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f0a6fc20-9a8f-4e97-8689-890f8a931a86-audit-dir\") pod \"apiserver-7bbb656c7d-phb5g\" (UID: \"f0a6fc20-9a8f-4e97-8689-890f8a931a86\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.376469 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f093c2f4-8a68-4d38-b957-21dd36402984-config\") pod \"route-controller-manager-6576b87f9c-fs4gv\" (UID: \"f093c2f4-8a68-4d38-b957-21dd36402984\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.377358 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.378069 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f0a6fc20-9a8f-4e97-8689-890f8a931a86-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-phb5g\" (UID: \"f0a6fc20-9a8f-4e97-8689-890f8a931a86\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.378172 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0a6fc20-9a8f-4e97-8689-890f8a931a86-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-phb5g\" (UID: \"f0a6fc20-9a8f-4e97-8689-890f8a931a86\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.378251 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a99b07fd-7413-4523-8812-f0c7fe540f6d-node-pullsecrets\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.379011 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a99b07fd-7413-4523-8812-f0c7fe540f6d-image-import-ca\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.379036 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/22f4cece-ea69-4c25-b492-8d03d960353e-images\") pod \"machine-api-operator-5694c8668f-q65jj\" (UID: \"22f4cece-ea69-4c25-b492-8d03d960353e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q65jj" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.379097 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.380046 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a99b07fd-7413-4523-8812-f0c7fe540f6d-trusted-ca-bundle\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.380573 4766 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.380709 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.381023 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.381326 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.381510 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.381666 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.381888 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.382054 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.382072 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.382255 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.382662 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a99b07fd-7413-4523-8812-f0c7fe540f6d-audit-dir\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.383132 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cf63d06-b674-4a7b-b896-5c78bc9d412d-client-ca\") pod \"controller-manager-879f6c89f-zj2l7\" (UID: \"2cf63d06-b674-4a7b-b896-5c78bc9d412d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.383334 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf63d06-b674-4a7b-b896-5c78bc9d412d-config\") pod \"controller-manager-879f6c89f-zj2l7\" (UID: \"2cf63d06-b674-4a7b-b896-5c78bc9d412d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.383612 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/569bc384-3b96-4207-8d46-5a27bf7f21cd-console-config\") pod \"console-f9d7485db-ncttr\" (UID: \"569bc384-3b96-4207-8d46-5a27bf7f21cd\") " pod="openshift-console/console-f9d7485db-ncttr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.384081 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/5ab22459-f606-452e-a71d-9f7e9212518d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-lq2vd\" (UID: \"5ab22459-f606-452e-a71d-9f7e9212518d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lq2vd" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.384092 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/569bc384-3b96-4207-8d46-5a27bf7f21cd-oauth-serving-cert\") pod \"console-f9d7485db-ncttr\" (UID: \"569bc384-3b96-4207-8d46-5a27bf7f21cd\") " pod="openshift-console/console-f9d7485db-ncttr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.384181 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.384313 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.384464 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.385716 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a99b07fd-7413-4523-8812-f0c7fe540f6d-audit\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.386728 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a99b07fd-7413-4523-8812-f0c7fe540f6d-etcd-serving-ca\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.387559 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/569bc384-3b96-4207-8d46-5a27bf7f21cd-console-oauth-config\") pod \"console-f9d7485db-ncttr\" (UID: \"569bc384-3b96-4207-8d46-5a27bf7f21cd\") " pod="openshift-console/console-f9d7485db-ncttr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.387991 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0a6fc20-9a8f-4e97-8689-890f8a931a86-serving-cert\") pod \"apiserver-7bbb656c7d-phb5g\" (UID: \"f0a6fc20-9a8f-4e97-8689-890f8a931a86\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.388208 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.391539 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-b6f2t"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.391757 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a99b07fd-7413-4523-8812-f0c7fe540f6d-etcd-client\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 
crc kubenswrapper[4766]: I0129 11:23:49.392744 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.392769 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6xbql"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.393316 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a99b07fd-7413-4523-8812-f0c7fe540f6d-encryption-config\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.393541 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/569bc384-3b96-4207-8d46-5a27bf7f21cd-service-ca\") pod \"console-f9d7485db-ncttr\" (UID: \"569bc384-3b96-4207-8d46-5a27bf7f21cd\") " pod="openshift-console/console-f9d7485db-ncttr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.394035 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2900468-bc28-42ef-8624-0e5b0a80f772-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bfddr\" (UID: \"d2900468-bc28-42ef-8624-0e5b0a80f772\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfddr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.394913 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f093c2f4-8a68-4d38-b957-21dd36402984-serving-cert\") pod \"route-controller-manager-6576b87f9c-fs4gv\" (UID: \"f093c2f4-8a68-4d38-b957-21dd36402984\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.394939 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2st4g"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.395637 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a99b07fd-7413-4523-8812-f0c7fe540f6d-serving-cert\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.395898 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f0a6fc20-9a8f-4e97-8689-890f8a931a86-audit-policies\") pod \"apiserver-7bbb656c7d-phb5g\" (UID: \"f0a6fc20-9a8f-4e97-8689-890f8a931a86\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.396157 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b6f2t" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.396228 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2st4g" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.396761 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.398877 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.401585 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/569bc384-3b96-4207-8d46-5a27bf7f21cd-console-serving-cert\") pod \"console-f9d7485db-ncttr\" (UID: \"569bc384-3b96-4207-8d46-5a27bf7f21cd\") " pod="openshift-console/console-f9d7485db-ncttr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.402190 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f0a6fc20-9a8f-4e97-8689-890f8a931a86-encryption-config\") pod \"apiserver-7bbb656c7d-phb5g\" (UID: \"f0a6fc20-9a8f-4e97-8689-890f8a931a86\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.402489 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2cf63d06-b674-4a7b-b896-5c78bc9d412d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zj2l7\" (UID: \"2cf63d06-b674-4a7b-b896-5c78bc9d412d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.402835 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cf63d06-b674-4a7b-b896-5c78bc9d412d-serving-cert\") pod \"controller-manager-879f6c89f-zj2l7\" (UID: \"2cf63d06-b674-4a7b-b896-5c78bc9d412d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.403077 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.403328 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.403531 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.403573 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.403933 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.404120 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.404138 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f0a6fc20-9a8f-4e97-8689-890f8a931a86-etcd-client\") pod \"apiserver-7bbb656c7d-phb5g\" (UID: \"f0a6fc20-9a8f-4e97-8689-890f8a931a86\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.404411 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d2900468-bc28-42ef-8624-0e5b0a80f772-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bfddr\" (UID: \"d2900468-bc28-42ef-8624-0e5b0a80f772\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfddr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.404543 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.405081 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/22f4cece-ea69-4c25-b492-8d03d960353e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-q65jj\" (UID: \"22f4cece-ea69-4c25-b492-8d03d960353e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q65jj" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.406821 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.407495 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.410082 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.412456 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.419416 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.425128 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.427162 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.427351 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.431179 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kgqmk"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.432063 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/569bc384-3b96-4207-8d46-5a27bf7f21cd-trusted-ca-bundle\") pod \"console-f9d7485db-ncttr\" (UID: \"569bc384-3b96-4207-8d46-5a27bf7f21cd\") " pod="openshift-console/console-f9d7485db-ncttr" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.432622 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kgqmk" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.435364 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-n8m6h"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.436194 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9wgx9"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.436715 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9wgx9" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.436909 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-d8l67"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.437001 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n8m6h" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.437904 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-d8l67" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.439014 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jjfht"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.440595 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-q65jj"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.440688 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jjfht" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.441176 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9wk84"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.441877 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9wk84" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.442457 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-h54ww"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.443127 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-h54ww" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.443496 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mfbhv"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.444208 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfbhv" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.445162 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ztc7c"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.445819 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lq2vd"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.445933 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.447233 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kt5b7"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.447589 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.448185 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kt5b7" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.448421 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x28zg"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.449890 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ffkp6"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.450019 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x28zg" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.451017 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zxdzm"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.451569 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ffkp6" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.451776 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-zxdzm" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.452136 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494755-ff4r9"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.452709 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-ff4r9" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.453577 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.454795 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-n4rj2"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.456011 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-cnns4"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.457153 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-cnns4" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.457495 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.458684 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-r9vtz"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.461275 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xwtsb"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.462349 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vnx7s"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.463751 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mjdnh"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.464863 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qkwt7"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.466369 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxg9v"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.466750 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.467216 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr4mj"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.468183 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr4mj" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.468485 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zj2l7"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.469721 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-x9vrs"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.472285 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b89e415-2430-4431-a579-fe555ba8771f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xxg9v\" (UID: \"1b89e415-2430-4431-a579-fe555ba8771f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxg9v" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.472342 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df181b4a-3b70-456c-9fd8-c1d03bee42f5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mjdnh\" (UID: \"df181b4a-3b70-456c-9fd8-c1d03bee42f5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mjdnh" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.472374 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfm7x\" (UniqueName: \"kubernetes.io/projected/7d3ca5b4-1aba-4925-a04a-8d0d3ee29328-kube-api-access-sfm7x\") pod \"openshift-controller-manager-operator-756b6f6bc6-gj52d\" (UID: \"7d3ca5b4-1aba-4925-a04a-8d0d3ee29328\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gj52d" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.472409 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.472462 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0d3c828-9641-4030-acfc-282a4dadcf1d-config\") pod \"authentication-operator-69f744f599-vnx7s\" (UID: \"e0d3c828-9641-4030-acfc-282a4dadcf1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vnx7s" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.472492 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e7de4ba-321d-4b46-b66b-cc1f437bd804-serving-cert\") pod \"etcd-operator-b45778765-x9vrs\" (UID: \"1e7de4ba-321d-4b46-b66b-cc1f437bd804\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x9vrs" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.472526 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e7553566-6a69-4542-892b-bd74d3c8ac0e-auth-proxy-config\") pod \"machine-approver-56656f9798-mkpxk\" (UID: \"e7553566-6a69-4542-892b-bd74d3c8ac0e\") 
" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mkpxk" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.472553 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1e7de4ba-321d-4b46-b66b-cc1f437bd804-etcd-ca\") pod \"etcd-operator-b45778765-x9vrs\" (UID: \"1e7de4ba-321d-4b46-b66b-cc1f437bd804\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x9vrs" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.472593 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.472629 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/eeaa3b58-307f-43e3-b8be-f5a93ae40bdc-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-6mbb9\" (UID: \"eeaa3b58-307f-43e3-b8be-f5a93ae40bdc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6mbb9" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.472658 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b89e415-2430-4431-a579-fe555ba8771f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xxg9v\" (UID: \"1b89e415-2430-4431-a579-fe555ba8771f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxg9v" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.472691 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.472712 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b89e415-2430-4431-a579-fe555ba8771f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xxg9v\" (UID: \"1b89e415-2430-4431-a579-fe555ba8771f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxg9v" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.472734 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e7553566-6a69-4542-892b-bd74d3c8ac0e-machine-approver-tls\") pod \"machine-approver-56656f9798-mkpxk\" (UID: \"e7553566-6a69-4542-892b-bd74d3c8ac0e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mkpxk" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.472755 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.473236 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-b6f2t"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.473643 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.473823 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1ded8a8f-f67c-422c-9818-d2ac883d4026-proxy-tls\") pod \"machine-config-operator-74547568cd-b6f2t\" (UID: \"1ded8a8f-f67c-422c-9818-d2ac883d4026\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b6f2t" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.473886 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phxkp\" (UniqueName: \"kubernetes.io/projected/9628ffe6-8bd9-40c2-82d9-d844078b7086-kube-api-access-phxkp\") pod \"dns-operator-744455d44c-7zssb\" (UID: \"9628ffe6-8bd9-40c2-82d9-d844078b7086\") " pod="openshift-dns-operator/dns-operator-744455d44c-7zssb" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.473964 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53d8ed7c-3414-4b1f-98c0-5b577dbc5b31-serving-cert\") pod \"openshift-config-operator-7777fb866f-qkwt7\" (UID: \"53d8ed7c-3414-4b1f-98c0-5b577dbc5b31\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkwt7" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.474041 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ee06c09-afd6-4909-a722-2812c4c391b7-serving-cert\") pod \"console-operator-58897d9998-xwtsb\" (UID: \"6ee06c09-afd6-4909-a722-2812c4c391b7\") " pod="openshift-console-operator/console-operator-58897d9998-xwtsb" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.474507 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.474608 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e7553566-6a69-4542-892b-bd74d3c8ac0e-auth-proxy-config\") pod \"machine-approver-56656f9798-mkpxk\" (UID: \"e7553566-6a69-4542-892b-bd74d3c8ac0e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mkpxk" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.474644 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e0d3c828-9641-4030-acfc-282a4dadcf1d-config\") pod \"authentication-operator-69f744f599-vnx7s\" (UID: \"e0d3c828-9641-4030-acfc-282a4dadcf1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vnx7s" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.474641 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d3ca5b4-1aba-4925-a04a-8d0d3ee29328-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gj52d\" (UID: \"7d3ca5b4-1aba-4925-a04a-8d0d3ee29328\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gj52d" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.474705 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9628ffe6-8bd9-40c2-82d9-d844078b7086-metrics-tls\") pod \"dns-operator-744455d44c-7zssb\" (UID: \"9628ffe6-8bd9-40c2-82d9-d844078b7086\") " pod="openshift-dns-operator/dns-operator-744455d44c-7zssb" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.474769 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l5ps\" (UniqueName: \"kubernetes.io/projected/e0d3c828-9641-4030-acfc-282a4dadcf1d-kube-api-access-8l5ps\") pod \"authentication-operator-69f744f599-vnx7s\" (UID: \"e0d3c828-9641-4030-acfc-282a4dadcf1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vnx7s" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.474804 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/53d8ed7c-3414-4b1f-98c0-5b577dbc5b31-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qkwt7\" (UID: \"53d8ed7c-3414-4b1f-98c0-5b577dbc5b31\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkwt7" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.474837 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.474868 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2bsn\" (UniqueName: \"kubernetes.io/projected/eeaa3b58-307f-43e3-b8be-f5a93ae40bdc-kube-api-access-d2bsn\") pod \"cluster-image-registry-operator-dc59b4c8b-6mbb9\" (UID: \"eeaa3b58-307f-43e3-b8be-f5a93ae40bdc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6mbb9" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.475067 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a7315b30-c300-4afe-b798-de15fe9e9cc8-audit-policies\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.475113 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/df181b4a-3b70-456c-9fd8-c1d03bee42f5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mjdnh\" (UID: \"df181b4a-3b70-456c-9fd8-c1d03bee42f5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mjdnh" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.475278 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0d3c828-9641-4030-acfc-282a4dadcf1d-service-ca-bundle\") pod \"authentication-operator-69f744f599-vnx7s\" (UID: \"e0d3c828-9641-4030-acfc-282a4dadcf1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vnx7s" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.475324 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eeaa3b58-307f-43e3-b8be-f5a93ae40bdc-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6mbb9\" (UID: \"eeaa3b58-307f-43e3-b8be-f5a93ae40bdc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6mbb9" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.475389 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.475734 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7553566-6a69-4542-892b-bd74d3c8ac0e-config\") pod \"machine-approver-56656f9798-mkpxk\" (UID: \"e7553566-6a69-4542-892b-bd74d3c8ac0e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mkpxk" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.475785 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.475862 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eeaa3b58-307f-43e3-b8be-f5a93ae40bdc-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6mbb9\" (UID: \"eeaa3b58-307f-43e3-b8be-f5a93ae40bdc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6mbb9" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.475894 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d3ca5b4-1aba-4925-a04a-8d0d3ee29328-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gj52d\" (UID: \"7d3ca5b4-1aba-4925-a04a-8d0d3ee29328\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gj52d" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.475945 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v4gt\" 
(UniqueName: \"kubernetes.io/projected/1ded8a8f-f67c-422c-9818-d2ac883d4026-kube-api-access-6v4gt\") pod \"machine-config-operator-74547568cd-b6f2t\" (UID: \"1ded8a8f-f67c-422c-9818-d2ac883d4026\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b6f2t" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.475980 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0d3c828-9641-4030-acfc-282a4dadcf1d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vnx7s\" (UID: \"e0d3c828-9641-4030-acfc-282a4dadcf1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vnx7s" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.476010 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr8hf\" (UniqueName: \"kubernetes.io/projected/6ee06c09-afd6-4909-a722-2812c4c391b7-kube-api-access-vr8hf\") pod \"console-operator-58897d9998-xwtsb\" (UID: \"6ee06c09-afd6-4909-a722-2812c4c391b7\") " pod="openshift-console-operator/console-operator-58897d9998-xwtsb" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.476046 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df181b4a-3b70-456c-9fd8-c1d03bee42f5-config\") pod \"kube-apiserver-operator-766d6c64bb-mjdnh\" (UID: \"df181b4a-3b70-456c-9fd8-c1d03bee42f5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mjdnh" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.476077 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1e7de4ba-321d-4b46-b66b-cc1f437bd804-etcd-service-ca\") pod \"etcd-operator-b45778765-x9vrs\" (UID: \"1e7de4ba-321d-4b46-b66b-cc1f437bd804\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x9vrs" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.476118 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e7de4ba-321d-4b46-b66b-cc1f437bd804-config\") pod \"etcd-operator-b45778765-x9vrs\" (UID: \"1e7de4ba-321d-4b46-b66b-cc1f437bd804\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x9vrs" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.476142 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqcv9\" (UniqueName: \"kubernetes.io/projected/1e7de4ba-321d-4b46-b66b-cc1f437bd804-kube-api-access-mqcv9\") pod \"etcd-operator-b45778765-x9vrs\" (UID: \"1e7de4ba-321d-4b46-b66b-cc1f437bd804\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x9vrs" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.476202 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ee06c09-afd6-4909-a722-2812c4c391b7-config\") pod \"console-operator-58897d9998-xwtsb\" (UID: \"6ee06c09-afd6-4909-a722-2812c4c391b7\") " pod="openshift-console-operator/console-operator-58897d9998-xwtsb" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.476228 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1e7de4ba-321d-4b46-b66b-cc1f437bd804-etcd-client\") pod 
\"etcd-operator-b45778765-x9vrs\" (UID: \"1e7de4ba-321d-4b46-b66b-cc1f437bd804\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x9vrs" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.476251 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1ded8a8f-f67c-422c-9818-d2ac883d4026-images\") pod \"machine-config-operator-74547568cd-b6f2t\" (UID: \"1ded8a8f-f67c-422c-9818-d2ac883d4026\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b6f2t" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.476284 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzd9l\" (UniqueName: \"kubernetes.io/projected/e7553566-6a69-4542-892b-bd74d3c8ac0e-kube-api-access-dzd9l\") pod \"machine-approver-56656f9798-mkpxk\" (UID: \"e7553566-6a69-4542-892b-bd74d3c8ac0e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mkpxk" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.476307 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ded8a8f-f67c-422c-9818-d2ac883d4026-auth-proxy-config\") pod \"machine-config-operator-74547568cd-b6f2t\" (UID: \"1ded8a8f-f67c-422c-9818-d2ac883d4026\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b6f2t" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.476352 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0d3c828-9641-4030-acfc-282a4dadcf1d-serving-cert\") pod \"authentication-operator-69f744f599-vnx7s\" (UID: \"e0d3c828-9641-4030-acfc-282a4dadcf1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vnx7s" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.476676 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a7315b30-c300-4afe-b798-de15fe9e9cc8-audit-dir\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.476722 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skv2q\" (UniqueName: \"kubernetes.io/projected/53d8ed7c-3414-4b1f-98c0-5b577dbc5b31-kube-api-access-skv2q\") pod \"openshift-config-operator-7777fb866f-qkwt7\" (UID: \"53d8ed7c-3414-4b1f-98c0-5b577dbc5b31\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkwt7" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.476747 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4mrt\" (UniqueName: \"kubernetes.io/projected/a7315b30-c300-4afe-b798-de15fe9e9cc8-kube-api-access-m4mrt\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.476781 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: 
\"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.476841 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ee06c09-afd6-4909-a722-2812c4c391b7-trusted-ca\") pod \"console-operator-58897d9998-xwtsb\" (UID: \"6ee06c09-afd6-4909-a722-2812c4c391b7\") " pod="openshift-console-operator/console-operator-58897d9998-xwtsb" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.476876 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.476897 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.476941 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eeaa3b58-307f-43e3-b8be-f5a93ae40bdc-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6mbb9\" (UID: \"eeaa3b58-307f-43e3-b8be-f5a93ae40bdc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6mbb9" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.477062 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a7315b30-c300-4afe-b798-de15fe9e9cc8-audit-dir\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.477086 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.477139 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-ncttr"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.476116 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a7315b30-c300-4afe-b798-de15fe9e9cc8-audit-policies\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.477689 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kgqmk"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.477891 4766 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.478229 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ee06c09-afd6-4909-a722-2812c4c391b7-config\") pod \"console-operator-58897d9998-xwtsb\" (UID: \"6ee06c09-afd6-4909-a722-2812c4c391b7\") " pod="openshift-console-operator/console-operator-58897d9998-xwtsb" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.478371 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.478481 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.478767 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/53d8ed7c-3414-4b1f-98c0-5b577dbc5b31-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qkwt7\" (UID: \"53d8ed7c-3414-4b1f-98c0-5b577dbc5b31\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkwt7" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.478942 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7553566-6a69-4542-892b-bd74d3c8ac0e-config\") pod \"machine-approver-56656f9798-mkpxk\" (UID: \"e7553566-6a69-4542-892b-bd74d3c8ac0e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mkpxk" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.479563 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0d3c828-9641-4030-acfc-282a4dadcf1d-service-ca-bundle\") pod \"authentication-operator-69f744f599-vnx7s\" (UID: \"e0d3c828-9641-4030-acfc-282a4dadcf1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vnx7s" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.480505 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gj52d"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.480643 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/eeaa3b58-307f-43e3-b8be-f5a93ae40bdc-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-6mbb9\" (UID: \"eeaa3b58-307f-43e3-b8be-f5a93ae40bdc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6mbb9" 
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.481164 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ee06c09-afd6-4909-a722-2812c4c391b7-trusted-ca\") pod \"console-operator-58897d9998-xwtsb\" (UID: \"6ee06c09-afd6-4909-a722-2812c4c391b7\") " pod="openshift-console-operator/console-operator-58897d9998-xwtsb" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.481637 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mfbhv"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.481979 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.482562 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0d3c828-9641-4030-acfc-282a4dadcf1d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vnx7s\" (UID: \"e0d3c828-9641-4030-acfc-282a4dadcf1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vnx7s" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.482846 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.482943 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.483365 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.485474 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ee06c09-afd6-4909-a722-2812c4c391b7-serving-cert\") pod \"console-operator-58897d9998-xwtsb\" (UID: \"6ee06c09-afd6-4909-a722-2812c4c391b7\") " pod="openshift-console-operator/console-operator-58897d9998-xwtsb" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.485603 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0d3c828-9641-4030-acfc-282a4dadcf1d-serving-cert\") pod \"authentication-operator-69f744f599-vnx7s\" (UID: \"e0d3c828-9641-4030-acfc-282a4dadcf1d\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-vnx7s" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.485757 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53d8ed7c-3414-4b1f-98c0-5b577dbc5b31-serving-cert\") pod \"openshift-config-operator-7777fb866f-qkwt7\" (UID: \"53d8ed7c-3414-4b1f-98c0-5b577dbc5b31\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkwt7" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.485900 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.486315 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e7553566-6a69-4542-892b-bd74d3c8ac0e-machine-approver-tls\") pod \"machine-approver-56656f9798-mkpxk\" (UID: \"e7553566-6a69-4542-892b-bd74d3c8ac0e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mkpxk" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.486702 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.488410 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-crmjf"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.491771 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-crmjf" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.495993 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfddr"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.498709 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6mbb9"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.502058 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bqx75"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.502305 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2st4g"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.505211 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9wgx9"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.507178 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.507517 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x28zg"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.509802 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7zssb"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.513684 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6xbql"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.515313 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zxdzm"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.516846 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9wk84"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.518439 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-cnns4"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.520346 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-n8m6h"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.522336 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ffkp6"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.523862 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ztc7c"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.525345 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-d8l67"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.526873 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kt5b7"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.527378 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.528350 4766 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-ingress-canary/ingress-canary-kgjkl"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.529057 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kgjkl" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.530243 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-crmjf"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.532099 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr4mj"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.533811 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494755-ff4r9"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.535659 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-bkp45"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.536645 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-bkp45" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.538284 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jjfht"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.539806 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kgjkl"] Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.547293 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.566700 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.579703 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/faf12f57-ca0e-47d4-bb9c-06b758d0ebbc-metrics-certs\") pod \"router-default-5444994796-h54ww\" (UID: \"faf12f57-ca0e-47d4-bb9c-06b758d0ebbc\") " pod="openshift-ingress/router-default-5444994796-h54ww" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.579754 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm5v6\" (UniqueName: \"kubernetes.io/projected/eaa7f58f-baeb-4ce9-8752-b1deb9ec5103-kube-api-access-qm5v6\") pod \"packageserver-d55dfcdfc-ffkp6\" (UID: \"eaa7f58f-baeb-4ce9-8752-b1deb9ec5103\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ffkp6" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.579784 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkhc5\" (UniqueName: \"kubernetes.io/projected/fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5-kube-api-access-dkhc5\") pod \"service-ca-9c57cc56f-zxdzm\" (UID: \"fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5\") " pod="openshift-service-ca/service-ca-9c57cc56f-zxdzm" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.580111 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d3ca5b4-1aba-4925-a04a-8d0d3ee29328-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-gj52d\" (UID: \"7d3ca5b4-1aba-4925-a04a-8d0d3ee29328\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gj52d" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.580985 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/faf12f57-ca0e-47d4-bb9c-06b758d0ebbc-service-ca-bundle\") pod \"router-default-5444994796-h54ww\" (UID: \"faf12f57-ca0e-47d4-bb9c-06b758d0ebbc\") " pod="openshift-ingress/router-default-5444994796-h54ww" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.581023 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v4gt\" (UniqueName: \"kubernetes.io/projected/1ded8a8f-f67c-422c-9818-d2ac883d4026-kube-api-access-6v4gt\") pod \"machine-config-operator-74547568cd-b6f2t\" (UID: \"1ded8a8f-f67c-422c-9818-d2ac883d4026\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b6f2t" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.581055 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eaa7f58f-baeb-4ce9-8752-b1deb9ec5103-apiservice-cert\") pod \"packageserver-d55dfcdfc-ffkp6\" (UID: \"eaa7f58f-baeb-4ce9-8752-b1deb9ec5103\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ffkp6" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.581074 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b47f\" (UniqueName: \"kubernetes.io/projected/0a78c46d-38f6-4cae-9fa7-36adb60b921e-kube-api-access-4b47f\") pod \"ingress-operator-5b745b69d9-2st4g\" (UID: \"0a78c46d-38f6-4cae-9fa7-36adb60b921e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2st4g" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.581094 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba79e1ab-c194-4c87-bd4f-45a4845b4d32-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fr4mj\" (UID: \"ba79e1ab-c194-4c87-bd4f-45a4845b4d32\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr4mj" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.581127 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcpbg\" (UniqueName: \"kubernetes.io/projected/b6b1d6a6-3e31-4fcf-88e2-d73f910a77ef-kube-api-access-bcpbg\") pod \"control-plane-machine-set-operator-78cbb6b69f-kgqmk\" (UID: \"b6b1d6a6-3e31-4fcf-88e2-d73f910a77ef\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kgqmk" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.581277 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1e7de4ba-321d-4b46-b66b-cc1f437bd804-etcd-client\") pod \"etcd-operator-b45778765-x9vrs\" (UID: \"1e7de4ba-321d-4b46-b66b-cc1f437bd804\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x9vrs" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.582084 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/1ded8a8f-f67c-422c-9818-d2ac883d4026-images\") pod \"machine-config-operator-74547568cd-b6f2t\" (UID: \"1ded8a8f-f67c-422c-9818-d2ac883d4026\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b6f2t" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.582149 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3cfb993e-e305-4ad1-81f6-349bc2544e60-secret-volume\") pod \"collect-profiles-29494755-ff4r9\" (UID: \"3cfb993e-e305-4ad1-81f6-349bc2544e60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-ff4r9" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.582203 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ded8a8f-f67c-422c-9818-d2ac883d4026-auth-proxy-config\") pod \"machine-config-operator-74547568cd-b6f2t\" (UID: \"1ded8a8f-f67c-422c-9818-d2ac883d4026\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b6f2t" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.582254 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/06681e16-3449-44aa-9680-1f1566bca8f3-csi-data-dir\") pod \"csi-hostpathplugin-crmjf\" (UID: \"06681e16-3449-44aa-9680-1f1566bca8f3\") " pod="hostpath-provisioner/csi-hostpathplugin-crmjf" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.582285 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b6b1d6a6-3e31-4fcf-88e2-d73f910a77ef-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kgqmk\" (UID: \"b6b1d6a6-3e31-4fcf-88e2-d73f910a77ef\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kgqmk" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.582312 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba79e1ab-c194-4c87-bd4f-45a4845b4d32-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fr4mj\" (UID: \"ba79e1ab-c194-4c87-bd4f-45a4845b4d32\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr4mj" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.582393 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4846\" (UniqueName: \"kubernetes.io/projected/3cfb993e-e305-4ad1-81f6-349bc2544e60-kube-api-access-x4846\") pod \"collect-profiles-29494755-ff4r9\" (UID: \"3cfb993e-e305-4ad1-81f6-349bc2544e60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-ff4r9" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.582438 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/089ac99c-bbe9-48d2-81fe-a021c3c218a4-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kt5b7\" (UID: \"089ac99c-bbe9-48d2-81fe-a021c3c218a4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kt5b7" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 
11:23:49.582473 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a78c46d-38f6-4cae-9fa7-36adb60b921e-trusted-ca\") pod \"ingress-operator-5b745b69d9-2st4g\" (UID: \"0a78c46d-38f6-4cae-9fa7-36adb60b921e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2st4g" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.582507 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b89e415-2430-4431-a579-fe555ba8771f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xxg9v\" (UID: \"1b89e415-2430-4431-a579-fe555ba8771f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxg9v" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.582534 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7a96ad2-50e7-4cc9-8070-185cb9d97774-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9wgx9\" (UID: \"f7a96ad2-50e7-4cc9-8070-185cb9d97774\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9wgx9" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.582563 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b311e5d-45ff-425d-b8af-d4cd47ccd4ea-serving-cert\") pod \"service-ca-operator-777779d784-jjfht\" (UID: \"5b311e5d-45ff-425d-b8af-d4cd47ccd4ea\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jjfht" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.582590 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5-signing-cabundle\") pod \"service-ca-9c57cc56f-zxdzm\" (UID: \"fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5\") " pod="openshift-service-ca/service-ca-9c57cc56f-zxdzm" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.582620 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58lqg\" (UniqueName: \"kubernetes.io/projected/699d99f6-a65f-4822-80af-7f254046575f-kube-api-access-58lqg\") pod \"catalog-operator-68c6474976-9wk84\" (UID: \"699d99f6-a65f-4822-80af-7f254046575f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9wk84" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.582653 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnnlh\" (UniqueName: \"kubernetes.io/projected/89eca04b-5abc-42ee-8878-094433bfe94b-kube-api-access-wnnlh\") pod \"multus-admission-controller-857f4d67dd-d8l67\" (UID: \"89eca04b-5abc-42ee-8878-094433bfe94b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d8l67" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.582684 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zslq7\" (UniqueName: \"kubernetes.io/projected/77154bcb-c7aa-4ee4-b8a4-3e599a303191-kube-api-access-zslq7\") pod \"machine-config-controller-84d6567774-mfbhv\" (UID: \"77154bcb-c7aa-4ee4-b8a4-3e599a303191\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfbhv" 
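The "SyncLoop ADD" and "SyncLoop UPDATE" records (kubelet.go:2421 and kubelet.go:2428) are the kubelet's main sync loop dispatching pod additions and updates received from the API server to per-pod workers; when a worker finds no runtime sandbox for a newly added pod, util.go logs "No sandbox for pod can be found. Need to start a new one" and a fresh pod sandbox is created. A hedged sketch, in the same vein as the one above, that reconstructs this per-pod event order from a captured journal; it assumes single-pod pods=[...] lists, as in every record here.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	// Matches SyncLoop dispatch records; only handles one pod per pods=[...] list.
	reSync = regexp.MustCompile(`"SyncLoop (ADD|UPDATE)" source="api" pods=\["([^"]+)"\]`)
	// Matches a pod worker deciding it must create a new sandbox.
	reSandbox = regexp.MustCompile(`"No sandbox for pod can be found\. Need to start a new one" pod="([^"]+)"`)
)

func main() {
	events := map[string][]string{} // pod -> ordered event kinds
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		line := sc.Text()
		if m := reSync.FindStringSubmatch(line); m != nil {
			events[m[2]] = append(events[m[2]], m[1])
		} else if m := reSandbox.FindStringSubmatch(line); m != nil {
			events[m[1]] = append(events[m[1]], "NEW-SANDBOX")
		}
	}
	// From the records above, hostpath-provisioner/csi-hostpathplugin-crmjf
	// would print as [ADD NEW-SANDBOX UPDATE].
	for pod, seq := range events {
		fmt.Printf("%-70s %v\n", pod, seq)
	}
}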
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.582731 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/77154bcb-c7aa-4ee4-b8a4-3e599a303191-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-mfbhv\" (UID: \"77154bcb-c7aa-4ee4-b8a4-3e599a303191\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfbhv" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.582757 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c9f47551-239d-46a0-857b-34eec9853f06-srv-cert\") pod \"olm-operator-6b444d44fb-x28zg\" (UID: \"c9f47551-239d-46a0-857b-34eec9853f06\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x28zg" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.582781 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwffn\" (UniqueName: \"kubernetes.io/projected/ba79e1ab-c194-4c87-bd4f-45a4845b4d32-kube-api-access-kwffn\") pod \"kube-storage-version-migrator-operator-b67b599dd-fr4mj\" (UID: \"ba79e1ab-c194-4c87-bd4f-45a4845b4d32\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr4mj" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.582824 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vw88\" (UniqueName: \"kubernetes.io/projected/7d34fd54-1d88-420a-a4cc-405e9d3900ab-kube-api-access-2vw88\") pod \"migrator-59844c95c7-n8m6h\" (UID: \"7d34fd54-1d88-420a-a4cc-405e9d3900ab\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n8m6h" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.582848 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b311e5d-45ff-425d-b8af-d4cd47ccd4ea-config\") pod \"service-ca-operator-777779d784-jjfht\" (UID: \"5b311e5d-45ff-425d-b8af-d4cd47ccd4ea\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jjfht" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.582888 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b89e415-2430-4431-a579-fe555ba8771f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xxg9v\" (UID: \"1b89e415-2430-4431-a579-fe555ba8771f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxg9v" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.582913 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae-metrics-tls\") pod \"dns-default-cnns4\" (UID: \"2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae\") " pod="openshift-dns/dns-default-cnns4" Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.582951 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1ded8a8f-f67c-422c-9818-d2ac883d4026-proxy-tls\") pod \"machine-config-operator-74547568cd-b6f2t\" (UID: \"1ded8a8f-f67c-422c-9818-d2ac883d4026\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b6f2t" Jan 29 
11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.582955 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ded8a8f-f67c-422c-9818-d2ac883d4026-auth-proxy-config\") pod \"machine-config-operator-74547568cd-b6f2t\" (UID: \"1ded8a8f-f67c-422c-9818-d2ac883d4026\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b6f2t"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.582977 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phxkp\" (UniqueName: \"kubernetes.io/projected/9628ffe6-8bd9-40c2-82d9-d844078b7086-kube-api-access-phxkp\") pod \"dns-operator-744455d44c-7zssb\" (UID: \"9628ffe6-8bd9-40c2-82d9-d844078b7086\") " pod="openshift-dns-operator/dns-operator-744455d44c-7zssb"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.583013 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/06681e16-3449-44aa-9680-1f1566bca8f3-registration-dir\") pod \"csi-hostpathplugin-crmjf\" (UID: \"06681e16-3449-44aa-9680-1f1566bca8f3\") " pod="hostpath-provisioner/csi-hostpathplugin-crmjf"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.583063 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9628ffe6-8bd9-40c2-82d9-d844078b7086-metrics-tls\") pod \"dns-operator-744455d44c-7zssb\" (UID: \"9628ffe6-8bd9-40c2-82d9-d844078b7086\") " pod="openshift-dns-operator/dns-operator-744455d44c-7zssb"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.583088 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5-signing-key\") pod \"service-ca-9c57cc56f-zxdzm\" (UID: \"fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5\") " pod="openshift-service-ca/service-ca-9c57cc56f-zxdzm"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.583113 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae-config-volume\") pod \"dns-default-cnns4\" (UID: \"2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae\") " pod="openshift-dns/dns-default-cnns4"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.583141 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tvz4\" (UniqueName: \"kubernetes.io/projected/089ac99c-bbe9-48d2-81fe-a021c3c218a4-kube-api-access-2tvz4\") pod \"package-server-manager-789f6589d5-kt5b7\" (UID: \"089ac99c-bbe9-48d2-81fe-a021c3c218a4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kt5b7"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.583186 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/eaa7f58f-baeb-4ce9-8752-b1deb9ec5103-tmpfs\") pod \"packageserver-d55dfcdfc-ffkp6\" (UID: \"eaa7f58f-baeb-4ce9-8752-b1deb9ec5103\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ffkp6"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.583211 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq46f\" (UniqueName: \"kubernetes.io/projected/5b311e5d-45ff-425d-b8af-d4cd47ccd4ea-kube-api-access-lq46f\") pod \"service-ca-operator-777779d784-jjfht\" (UID: \"5b311e5d-45ff-425d-b8af-d4cd47ccd4ea\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jjfht"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.583237 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkzx4\" (UniqueName: \"kubernetes.io/projected/c9f47551-239d-46a0-857b-34eec9853f06-kube-api-access-mkzx4\") pod \"olm-operator-6b444d44fb-x28zg\" (UID: \"c9f47551-239d-46a0-857b-34eec9853f06\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x28zg"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.583276 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w82jt\" (UniqueName: \"kubernetes.io/projected/faf12f57-ca0e-47d4-bb9c-06b758d0ebbc-kube-api-access-w82jt\") pod \"router-default-5444994796-h54ww\" (UID: \"faf12f57-ca0e-47d4-bb9c-06b758d0ebbc\") " pod="openshift-ingress/router-default-5444994796-h54ww"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.583305 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/699d99f6-a65f-4822-80af-7f254046575f-srv-cert\") pod \"catalog-operator-68c6474976-9wk84\" (UID: \"699d99f6-a65f-4822-80af-7f254046575f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9wk84"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.583342 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7a96ad2-50e7-4cc9-8070-185cb9d97774-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9wgx9\" (UID: \"f7a96ad2-50e7-4cc9-8070-185cb9d97774\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9wgx9"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.583393 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df181b4a-3b70-456c-9fd8-c1d03bee42f5-config\") pod \"kube-apiserver-operator-766d6c64bb-mjdnh\" (UID: \"df181b4a-3b70-456c-9fd8-c1d03bee42f5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mjdnh"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.583420 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1e7de4ba-321d-4b46-b66b-cc1f437bd804-etcd-service-ca\") pod \"etcd-operator-b45778765-x9vrs\" (UID: \"1e7de4ba-321d-4b46-b66b-cc1f437bd804\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x9vrs"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.583469 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72cf9723-cba4-4f3b-90c4-c8b919e9b7a8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ztc7c\" (UID: \"72cf9723-cba4-4f3b-90c4-c8b919e9b7a8\") " pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.583512 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e7de4ba-321d-4b46-b66b-cc1f437bd804-config\") pod \"etcd-operator-b45778765-x9vrs\" (UID: \"1e7de4ba-321d-4b46-b66b-cc1f437bd804\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x9vrs"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.583536 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqcv9\" (UniqueName: \"kubernetes.io/projected/1e7de4ba-321d-4b46-b66b-cc1f437bd804-kube-api-access-mqcv9\") pod \"etcd-operator-b45778765-x9vrs\" (UID: \"1e7de4ba-321d-4b46-b66b-cc1f437bd804\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x9vrs"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.583559 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eaa7f58f-baeb-4ce9-8752-b1deb9ec5103-webhook-cert\") pod \"packageserver-d55dfcdfc-ffkp6\" (UID: \"eaa7f58f-baeb-4ce9-8752-b1deb9ec5103\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ffkp6"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.583580 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/06681e16-3449-44aa-9680-1f1566bca8f3-socket-dir\") pod \"csi-hostpathplugin-crmjf\" (UID: \"06681e16-3449-44aa-9680-1f1566bca8f3\") " pod="hostpath-provisioner/csi-hostpathplugin-crmjf"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.583605 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c9f47551-239d-46a0-857b-34eec9853f06-profile-collector-cert\") pod \"olm-operator-6b444d44fb-x28zg\" (UID: \"c9f47551-239d-46a0-857b-34eec9853f06\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x28zg"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.583636 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0a78c46d-38f6-4cae-9fa7-36adb60b921e-metrics-tls\") pod \"ingress-operator-5b745b69d9-2st4g\" (UID: \"0a78c46d-38f6-4cae-9fa7-36adb60b921e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2st4g"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.583685 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/faf12f57-ca0e-47d4-bb9c-06b758d0ebbc-default-certificate\") pod \"router-default-5444994796-h54ww\" (UID: \"faf12f57-ca0e-47d4-bb9c-06b758d0ebbc\") " pod="openshift-ingress/router-default-5444994796-h54ww"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.583724 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/89eca04b-5abc-42ee-8878-094433bfe94b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-d8l67\" (UID: \"89eca04b-5abc-42ee-8878-094433bfe94b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d8l67"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.583753 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmf2l\" (UniqueName: \"kubernetes.io/projected/06681e16-3449-44aa-9680-1f1566bca8f3-kube-api-access-cmf2l\") pod \"csi-hostpathplugin-crmjf\" (UID: \"06681e16-3449-44aa-9680-1f1566bca8f3\") " pod="hostpath-provisioner/csi-hostpathplugin-crmjf"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.583804 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mrl2\" (UniqueName: \"kubernetes.io/projected/2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae-kube-api-access-5mrl2\") pod \"dns-default-cnns4\" (UID: \"2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae\") " pod="openshift-dns/dns-default-cnns4"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.583829 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/72cf9723-cba4-4f3b-90c4-c8b919e9b7a8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ztc7c\" (UID: \"72cf9723-cba4-4f3b-90c4-c8b919e9b7a8\") " pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.583866 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df181b4a-3b70-456c-9fd8-c1d03bee42f5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mjdnh\" (UID: \"df181b4a-3b70-456c-9fd8-c1d03bee42f5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mjdnh"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.583892 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfm7x\" (UniqueName: \"kubernetes.io/projected/7d3ca5b4-1aba-4925-a04a-8d0d3ee29328-kube-api-access-sfm7x\") pod \"openshift-controller-manager-operator-756b6f6bc6-gj52d\" (UID: \"7d3ca5b4-1aba-4925-a04a-8d0d3ee29328\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gj52d"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.583939 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/699d99f6-a65f-4822-80af-7f254046575f-profile-collector-cert\") pod \"catalog-operator-68c6474976-9wk84\" (UID: \"699d99f6-a65f-4822-80af-7f254046575f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9wk84"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.584325 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e7de4ba-321d-4b46-b66b-cc1f437bd804-serving-cert\") pod \"etcd-operator-b45778765-x9vrs\" (UID: \"1e7de4ba-321d-4b46-b66b-cc1f437bd804\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x9vrs"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.584367 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1e7de4ba-321d-4b46-b66b-cc1f437bd804-etcd-ca\") pod \"etcd-operator-b45778765-x9vrs\" (UID: \"1e7de4ba-321d-4b46-b66b-cc1f437bd804\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x9vrs"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.584404 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs67p\" (UniqueName: \"kubernetes.io/projected/72cf9723-cba4-4f3b-90c4-c8b919e9b7a8-kube-api-access-vs67p\") pod \"marketplace-operator-79b997595-ztc7c\" (UID: \"72cf9723-cba4-4f3b-90c4-c8b919e9b7a8\") " pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.584456 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/77154bcb-c7aa-4ee4-b8a4-3e599a303191-proxy-tls\") pod \"machine-config-controller-84d6567774-mfbhv\" (UID: \"77154bcb-c7aa-4ee4-b8a4-3e599a303191\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfbhv"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.585045 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b89e415-2430-4431-a579-fe555ba8771f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xxg9v\" (UID: \"1b89e415-2430-4431-a579-fe555ba8771f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxg9v"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.585099 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cfb993e-e305-4ad1-81f6-349bc2544e60-config-volume\") pod \"collect-profiles-29494755-ff4r9\" (UID: \"3cfb993e-e305-4ad1-81f6-349bc2544e60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-ff4r9"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.585124 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/06681e16-3449-44aa-9680-1f1566bca8f3-plugins-dir\") pod \"csi-hostpathplugin-crmjf\" (UID: \"06681e16-3449-44aa-9680-1f1566bca8f3\") " pod="hostpath-provisioner/csi-hostpathplugin-crmjf"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.585148 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7a96ad2-50e7-4cc9-8070-185cb9d97774-config\") pod \"kube-controller-manager-operator-78b949d7b-9wgx9\" (UID: \"f7a96ad2-50e7-4cc9-8070-185cb9d97774\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9wgx9"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.585149 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d3ca5b4-1aba-4925-a04a-8d0d3ee29328-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gj52d\" (UID: \"7d3ca5b4-1aba-4925-a04a-8d0d3ee29328\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gj52d"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.585171 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0a78c46d-38f6-4cae-9fa7-36adb60b921e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2st4g\" (UID: \"0a78c46d-38f6-4cae-9fa7-36adb60b921e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2st4g"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.585196 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e7de4ba-321d-4b46-b66b-cc1f437bd804-config\") pod \"etcd-operator-b45778765-x9vrs\" (UID: \"1e7de4ba-321d-4b46-b66b-cc1f437bd804\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x9vrs"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.585301 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/faf12f57-ca0e-47d4-bb9c-06b758d0ebbc-stats-auth\") pod \"router-default-5444994796-h54ww\" (UID: \"faf12f57-ca0e-47d4-bb9c-06b758d0ebbc\") " pod="openshift-ingress/router-default-5444994796-h54ww"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.585333 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d3ca5b4-1aba-4925-a04a-8d0d3ee29328-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gj52d\" (UID: \"7d3ca5b4-1aba-4925-a04a-8d0d3ee29328\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gj52d"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.585684 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df181b4a-3b70-456c-9fd8-c1d03bee42f5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mjdnh\" (UID: \"df181b4a-3b70-456c-9fd8-c1d03bee42f5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mjdnh"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.585767 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/06681e16-3449-44aa-9680-1f1566bca8f3-mountpoint-dir\") pod \"csi-hostpathplugin-crmjf\" (UID: \"06681e16-3449-44aa-9680-1f1566bca8f3\") " pod="hostpath-provisioner/csi-hostpathplugin-crmjf"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.585946 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d3ca5b4-1aba-4925-a04a-8d0d3ee29328-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gj52d\" (UID: \"7d3ca5b4-1aba-4925-a04a-8d0d3ee29328\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gj52d"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.586444 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1e7de4ba-321d-4b46-b66b-cc1f437bd804-etcd-ca\") pod \"etcd-operator-b45778765-x9vrs\" (UID: \"1e7de4ba-321d-4b46-b66b-cc1f437bd804\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x9vrs"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.586518 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1e7de4ba-321d-4b46-b66b-cc1f437bd804-etcd-service-ca\") pod \"etcd-operator-b45778765-x9vrs\" (UID: \"1e7de4ba-321d-4b46-b66b-cc1f437bd804\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x9vrs"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.587826 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df181b4a-3b70-456c-9fd8-c1d03bee42f5-config\") pod \"kube-apiserver-operator-766d6c64bb-mjdnh\" (UID: \"df181b4a-3b70-456c-9fd8-c1d03bee42f5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mjdnh"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.588514 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.589374 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1e7de4ba-321d-4b46-b66b-cc1f437bd804-etcd-client\") pod \"etcd-operator-b45778765-x9vrs\" (UID: \"1e7de4ba-321d-4b46-b66b-cc1f437bd804\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x9vrs"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.589752 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df181b4a-3b70-456c-9fd8-c1d03bee42f5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mjdnh\" (UID: \"df181b4a-3b70-456c-9fd8-c1d03bee42f5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mjdnh"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.589958 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b89e415-2430-4431-a579-fe555ba8771f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xxg9v\" (UID: \"1b89e415-2430-4431-a579-fe555ba8771f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxg9v"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.591126 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e7de4ba-321d-4b46-b66b-cc1f437bd804-serving-cert\") pod \"etcd-operator-b45778765-x9vrs\" (UID: \"1e7de4ba-321d-4b46-b66b-cc1f437bd804\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x9vrs"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.594868 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b89e415-2430-4431-a579-fe555ba8771f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xxg9v\" (UID: \"1b89e415-2430-4431-a579-fe555ba8771f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxg9v"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.607285 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.619811 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9628ffe6-8bd9-40c2-82d9-d844078b7086-metrics-tls\") pod \"dns-operator-744455d44c-7zssb\" (UID: \"9628ffe6-8bd9-40c2-82d9-d844078b7086\") " pod="openshift-dns-operator/dns-operator-744455d44c-7zssb"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.626862 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.647485 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.682878 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkznx\" (UniqueName: \"kubernetes.io/projected/2cf63d06-b674-4a7b-b896-5c78bc9d412d-kube-api-access-gkznx\") pod \"controller-manager-879f6c89f-zj2l7\" (UID: \"2cf63d06-b674-4a7b-b896-5c78bc9d412d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.687022 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/699d99f6-a65f-4822-80af-7f254046575f-srv-cert\") pod \"catalog-operator-68c6474976-9wk84\" (UID: \"699d99f6-a65f-4822-80af-7f254046575f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9wk84"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.687231 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7a96ad2-50e7-4cc9-8070-185cb9d97774-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9wgx9\" (UID: \"f7a96ad2-50e7-4cc9-8070-185cb9d97774\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9wgx9"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.687359 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72cf9723-cba4-4f3b-90c4-c8b919e9b7a8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ztc7c\" (UID: \"72cf9723-cba4-4f3b-90c4-c8b919e9b7a8\") " pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.687512 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/06681e16-3449-44aa-9680-1f1566bca8f3-socket-dir\") pod \"csi-hostpathplugin-crmjf\" (UID: \"06681e16-3449-44aa-9680-1f1566bca8f3\") " pod="hostpath-provisioner/csi-hostpathplugin-crmjf"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.687622 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c9f47551-239d-46a0-857b-34eec9853f06-profile-collector-cert\") pod \"olm-operator-6b444d44fb-x28zg\" (UID: \"c9f47551-239d-46a0-857b-34eec9853f06\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x28zg"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.687718 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eaa7f58f-baeb-4ce9-8752-b1deb9ec5103-webhook-cert\") pod \"packageserver-d55dfcdfc-ffkp6\" (UID: \"eaa7f58f-baeb-4ce9-8752-b1deb9ec5103\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ffkp6"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.687786 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0a78c46d-38f6-4cae-9fa7-36adb60b921e-metrics-tls\") pod \"ingress-operator-5b745b69d9-2st4g\" (UID: \"0a78c46d-38f6-4cae-9fa7-36adb60b921e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2st4g"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.687865 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/faf12f57-ca0e-47d4-bb9c-06b758d0ebbc-default-certificate\") pod \"router-default-5444994796-h54ww\" (UID: \"faf12f57-ca0e-47d4-bb9c-06b758d0ebbc\") " pod="openshift-ingress/router-default-5444994796-h54ww"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.687908 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/06681e16-3449-44aa-9680-1f1566bca8f3-socket-dir\") pod \"csi-hostpathplugin-crmjf\" (UID: \"06681e16-3449-44aa-9680-1f1566bca8f3\") " pod="hostpath-provisioner/csi-hostpathplugin-crmjf"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.687957 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/89eca04b-5abc-42ee-8878-094433bfe94b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-d8l67\" (UID: \"89eca04b-5abc-42ee-8878-094433bfe94b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d8l67"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.688115 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmf2l\" (UniqueName: \"kubernetes.io/projected/06681e16-3449-44aa-9680-1f1566bca8f3-kube-api-access-cmf2l\") pod \"csi-hostpathplugin-crmjf\" (UID: \"06681e16-3449-44aa-9680-1f1566bca8f3\") " pod="hostpath-provisioner/csi-hostpathplugin-crmjf"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.688172 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mrl2\" (UniqueName: \"kubernetes.io/projected/2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae-kube-api-access-5mrl2\") pod \"dns-default-cnns4\" (UID: \"2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae\") " pod="openshift-dns/dns-default-cnns4"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.688197 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/72cf9723-cba4-4f3b-90c4-c8b919e9b7a8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ztc7c\" (UID: \"72cf9723-cba4-4f3b-90c4-c8b919e9b7a8\") " pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.688263 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/699d99f6-a65f-4822-80af-7f254046575f-profile-collector-cert\") pod \"catalog-operator-68c6474976-9wk84\" (UID: \"699d99f6-a65f-4822-80af-7f254046575f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9wk84"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.688525 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/77154bcb-c7aa-4ee4-b8a4-3e599a303191-proxy-tls\") pod \"machine-config-controller-84d6567774-mfbhv\" (UID: \"77154bcb-c7aa-4ee4-b8a4-3e599a303191\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfbhv"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.688558 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs67p\" (UniqueName: \"kubernetes.io/projected/72cf9723-cba4-4f3b-90c4-c8b919e9b7a8-kube-api-access-vs67p\") pod \"marketplace-operator-79b997595-ztc7c\" (UID: \"72cf9723-cba4-4f3b-90c4-c8b919e9b7a8\") " pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.688602 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/06681e16-3449-44aa-9680-1f1566bca8f3-plugins-dir\") pod \"csi-hostpathplugin-crmjf\" (UID: \"06681e16-3449-44aa-9680-1f1566bca8f3\") " pod="hostpath-provisioner/csi-hostpathplugin-crmjf"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.688624 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7a96ad2-50e7-4cc9-8070-185cb9d97774-config\") pod \"kube-controller-manager-operator-78b949d7b-9wgx9\" (UID: \"f7a96ad2-50e7-4cc9-8070-185cb9d97774\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9wgx9"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.688648 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cfb993e-e305-4ad1-81f6-349bc2544e60-config-volume\") pod \"collect-profiles-29494755-ff4r9\" (UID: \"3cfb993e-e305-4ad1-81f6-349bc2544e60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-ff4r9"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.688669 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0a78c46d-38f6-4cae-9fa7-36adb60b921e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2st4g\" (UID: \"0a78c46d-38f6-4cae-9fa7-36adb60b921e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2st4g"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.688683 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/06681e16-3449-44aa-9680-1f1566bca8f3-plugins-dir\") pod \"csi-hostpathplugin-crmjf\" (UID: \"06681e16-3449-44aa-9680-1f1566bca8f3\") " pod="hostpath-provisioner/csi-hostpathplugin-crmjf"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.688690 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/faf12f57-ca0e-47d4-bb9c-06b758d0ebbc-stats-auth\") pod \"router-default-5444994796-h54ww\" (UID: \"faf12f57-ca0e-47d4-bb9c-06b758d0ebbc\") " pod="openshift-ingress/router-default-5444994796-h54ww"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.688827 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/06681e16-3449-44aa-9680-1f1566bca8f3-mountpoint-dir\") pod \"csi-hostpathplugin-crmjf\" (UID: \"06681e16-3449-44aa-9680-1f1566bca8f3\") " pod="hostpath-provisioner/csi-hostpathplugin-crmjf"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.688862 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/faf12f57-ca0e-47d4-bb9c-06b758d0ebbc-metrics-certs\") pod \"router-default-5444994796-h54ww\" (UID: \"faf12f57-ca0e-47d4-bb9c-06b758d0ebbc\") " pod="openshift-ingress/router-default-5444994796-h54ww"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.688893 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm5v6\" (UniqueName: \"kubernetes.io/projected/eaa7f58f-baeb-4ce9-8752-b1deb9ec5103-kube-api-access-qm5v6\") pod \"packageserver-d55dfcdfc-ffkp6\" (UID: \"eaa7f58f-baeb-4ce9-8752-b1deb9ec5103\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ffkp6"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.688918 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkhc5\" (UniqueName: \"kubernetes.io/projected/fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5-kube-api-access-dkhc5\") pod \"service-ca-9c57cc56f-zxdzm\" (UID: \"fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5\") " pod="openshift-service-ca/service-ca-9c57cc56f-zxdzm"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.688992 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/faf12f57-ca0e-47d4-bb9c-06b758d0ebbc-service-ca-bundle\") pod \"router-default-5444994796-h54ww\" (UID: \"faf12f57-ca0e-47d4-bb9c-06b758d0ebbc\") " pod="openshift-ingress/router-default-5444994796-h54ww"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689017 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba79e1ab-c194-4c87-bd4f-45a4845b4d32-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fr4mj\" (UID: \"ba79e1ab-c194-4c87-bd4f-45a4845b4d32\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr4mj"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689041 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/06681e16-3449-44aa-9680-1f1566bca8f3-mountpoint-dir\") pod \"csi-hostpathplugin-crmjf\" (UID: \"06681e16-3449-44aa-9680-1f1566bca8f3\") " pod="hostpath-provisioner/csi-hostpathplugin-crmjf"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689048 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eaa7f58f-baeb-4ce9-8752-b1deb9ec5103-apiservice-cert\") pod \"packageserver-d55dfcdfc-ffkp6\" (UID: \"eaa7f58f-baeb-4ce9-8752-b1deb9ec5103\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ffkp6"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689161 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b47f\" (UniqueName: \"kubernetes.io/projected/0a78c46d-38f6-4cae-9fa7-36adb60b921e-kube-api-access-4b47f\") pod \"ingress-operator-5b745b69d9-2st4g\" (UID: \"0a78c46d-38f6-4cae-9fa7-36adb60b921e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2st4g"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689213 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcpbg\" (UniqueName: \"kubernetes.io/projected/b6b1d6a6-3e31-4fcf-88e2-d73f910a77ef-kube-api-access-bcpbg\") pod \"control-plane-machine-set-operator-78cbb6b69f-kgqmk\" (UID: \"b6b1d6a6-3e31-4fcf-88e2-d73f910a77ef\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kgqmk"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689235 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3cfb993e-e305-4ad1-81f6-349bc2544e60-secret-volume\") pod \"collect-profiles-29494755-ff4r9\" (UID: \"3cfb993e-e305-4ad1-81f6-349bc2544e60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-ff4r9"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689304 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/06681e16-3449-44aa-9680-1f1566bca8f3-csi-data-dir\") pod \"csi-hostpathplugin-crmjf\" (UID: \"06681e16-3449-44aa-9680-1f1566bca8f3\") " pod="hostpath-provisioner/csi-hostpathplugin-crmjf"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689327 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b6b1d6a6-3e31-4fcf-88e2-d73f910a77ef-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kgqmk\" (UID: \"b6b1d6a6-3e31-4fcf-88e2-d73f910a77ef\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kgqmk"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689354 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba79e1ab-c194-4c87-bd4f-45a4845b4d32-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fr4mj\" (UID: \"ba79e1ab-c194-4c87-bd4f-45a4845b4d32\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr4mj"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689376 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/089ac99c-bbe9-48d2-81fe-a021c3c218a4-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kt5b7\" (UID: \"089ac99c-bbe9-48d2-81fe-a021c3c218a4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kt5b7"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689434 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4846\" (UniqueName: \"kubernetes.io/projected/3cfb993e-e305-4ad1-81f6-349bc2544e60-kube-api-access-x4846\") pod \"collect-profiles-29494755-ff4r9\" (UID: \"3cfb993e-e305-4ad1-81f6-349bc2544e60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-ff4r9"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689459 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a78c46d-38f6-4cae-9fa7-36adb60b921e-trusted-ca\") pod \"ingress-operator-5b745b69d9-2st4g\" (UID: \"0a78c46d-38f6-4cae-9fa7-36adb60b921e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2st4g"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689491 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7a96ad2-50e7-4cc9-8070-185cb9d97774-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9wgx9\" (UID: \"f7a96ad2-50e7-4cc9-8070-185cb9d97774\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9wgx9"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689513 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b311e5d-45ff-425d-b8af-d4cd47ccd4ea-serving-cert\") pod \"service-ca-operator-777779d784-jjfht\" (UID: \"5b311e5d-45ff-425d-b8af-d4cd47ccd4ea\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jjfht"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689572 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5-signing-cabundle\") pod \"service-ca-9c57cc56f-zxdzm\" (UID: \"fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5\") " pod="openshift-service-ca/service-ca-9c57cc56f-zxdzm"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689589 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/06681e16-3449-44aa-9680-1f1566bca8f3-csi-data-dir\") pod \"csi-hostpathplugin-crmjf\" (UID: \"06681e16-3449-44aa-9680-1f1566bca8f3\") " pod="hostpath-provisioner/csi-hostpathplugin-crmjf"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689597 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58lqg\" (UniqueName: \"kubernetes.io/projected/699d99f6-a65f-4822-80af-7f254046575f-kube-api-access-58lqg\") pod \"catalog-operator-68c6474976-9wk84\" (UID: \"699d99f6-a65f-4822-80af-7f254046575f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9wk84"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689630 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnnlh\" (UniqueName: \"kubernetes.io/projected/89eca04b-5abc-42ee-8878-094433bfe94b-kube-api-access-wnnlh\") pod \"multus-admission-controller-857f4d67dd-d8l67\" (UID: \"89eca04b-5abc-42ee-8878-094433bfe94b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d8l67"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689654 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zslq7\" (UniqueName: \"kubernetes.io/projected/77154bcb-c7aa-4ee4-b8a4-3e599a303191-kube-api-access-zslq7\") pod \"machine-config-controller-84d6567774-mfbhv\" (UID: \"77154bcb-c7aa-4ee4-b8a4-3e599a303191\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfbhv"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689673 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwffn\" (UniqueName: \"kubernetes.io/projected/ba79e1ab-c194-4c87-bd4f-45a4845b4d32-kube-api-access-kwffn\") pod \"kube-storage-version-migrator-operator-b67b599dd-fr4mj\" (UID: \"ba79e1ab-c194-4c87-bd4f-45a4845b4d32\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr4mj"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689708 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/77154bcb-c7aa-4ee4-b8a4-3e599a303191-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-mfbhv\" (UID: \"77154bcb-c7aa-4ee4-b8a4-3e599a303191\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfbhv"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689728 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c9f47551-239d-46a0-857b-34eec9853f06-srv-cert\") pod \"olm-operator-6b444d44fb-x28zg\" (UID: \"c9f47551-239d-46a0-857b-34eec9853f06\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x28zg"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689764 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vw88\" (UniqueName: \"kubernetes.io/projected/7d34fd54-1d88-420a-a4cc-405e9d3900ab-kube-api-access-2vw88\") pod \"migrator-59844c95c7-n8m6h\" (UID: \"7d34fd54-1d88-420a-a4cc-405e9d3900ab\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n8m6h"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689782 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b311e5d-45ff-425d-b8af-d4cd47ccd4ea-config\") pod \"service-ca-operator-777779d784-jjfht\" (UID: \"5b311e5d-45ff-425d-b8af-d4cd47ccd4ea\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jjfht"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689816 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae-metrics-tls\") pod \"dns-default-cnns4\" (UID: \"2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae\") " pod="openshift-dns/dns-default-cnns4"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689865 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/06681e16-3449-44aa-9680-1f1566bca8f3-registration-dir\") pod \"csi-hostpathplugin-crmjf\" (UID: \"06681e16-3449-44aa-9680-1f1566bca8f3\") " pod="hostpath-provisioner/csi-hostpathplugin-crmjf"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689908 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5-signing-key\") pod \"service-ca-9c57cc56f-zxdzm\" (UID: \"fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5\") " pod="openshift-service-ca/service-ca-9c57cc56f-zxdzm"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689934 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae-config-volume\") pod \"dns-default-cnns4\" (UID: \"2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae\") " pod="openshift-dns/dns-default-cnns4"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689965 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tvz4\" (UniqueName: \"kubernetes.io/projected/089ac99c-bbe9-48d2-81fe-a021c3c218a4-kube-api-access-2tvz4\") pod \"package-server-manager-789f6589d5-kt5b7\" (UID: \"089ac99c-bbe9-48d2-81fe-a021c3c218a4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kt5b7"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.689989 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/06681e16-3449-44aa-9680-1f1566bca8f3-registration-dir\") pod \"csi-hostpathplugin-crmjf\" (UID: \"06681e16-3449-44aa-9680-1f1566bca8f3\") " pod="hostpath-provisioner/csi-hostpathplugin-crmjf"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.690003 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/eaa7f58f-baeb-4ce9-8752-b1deb9ec5103-tmpfs\") pod \"packageserver-d55dfcdfc-ffkp6\" (UID: \"eaa7f58f-baeb-4ce9-8752-b1deb9ec5103\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ffkp6"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.690025 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq46f\" (UniqueName: \"kubernetes.io/projected/5b311e5d-45ff-425d-b8af-d4cd47ccd4ea-kube-api-access-lq46f\") pod \"service-ca-operator-777779d784-jjfht\" (UID: \"5b311e5d-45ff-425d-b8af-d4cd47ccd4ea\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jjfht"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.690054 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkzx4\" (UniqueName: \"kubernetes.io/projected/c9f47551-239d-46a0-857b-34eec9853f06-kube-api-access-mkzx4\") pod \"olm-operator-6b444d44fb-x28zg\" (UID: \"c9f47551-239d-46a0-857b-34eec9853f06\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x28zg"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.690090 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w82jt\" (UniqueName: \"kubernetes.io/projected/faf12f57-ca0e-47d4-bb9c-06b758d0ebbc-kube-api-access-w82jt\") pod \"router-default-5444994796-h54ww\" (UID: \"faf12f57-ca0e-47d4-bb9c-06b758d0ebbc\") " pod="openshift-ingress/router-default-5444994796-h54ww"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.690543 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/eaa7f58f-baeb-4ce9-8752-b1deb9ec5103-tmpfs\") pod \"packageserver-d55dfcdfc-ffkp6\" (UID: \"eaa7f58f-baeb-4ce9-8752-b1deb9ec5103\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ffkp6"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.690857 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/77154bcb-c7aa-4ee4-b8a4-3e599a303191-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-mfbhv\" (UID: \"77154bcb-c7aa-4ee4-b8a4-3e599a303191\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfbhv"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.702716 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6knvg\" (UniqueName: \"kubernetes.io/projected/3992a1ef-5774-468c-9640-cd23218862cc-kube-api-access-6knvg\") pod \"downloads-7954f5f757-bqx75\" (UID: \"3992a1ef-5774-468c-9640-cd23218862cc\") " pod="openshift-console/downloads-7954f5f757-bqx75"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.723576 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxk6x\" (UniqueName: \"kubernetes.io/projected/569bc384-3b96-4207-8d46-5a27bf7f21cd-kube-api-access-cxk6x\") pod \"console-f9d7485db-ncttr\" (UID: \"569bc384-3b96-4207-8d46-5a27bf7f21cd\") " pod="openshift-console/console-f9d7485db-ncttr"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.742712 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84fks\" (UniqueName: \"kubernetes.io/projected/d2900468-bc28-42ef-8624-0e5b0a80f772-kube-api-access-84fks\") pod \"openshift-apiserver-operator-796bbdcf4f-bfddr\" (UID: \"d2900468-bc28-42ef-8624-0e5b0a80f772\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfddr"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.761398 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsm67\" (UniqueName: \"kubernetes.io/projected/f0a6fc20-9a8f-4e97-8689-890f8a931a86-kube-api-access-lsm67\") pod \"apiserver-7bbb656c7d-phb5g\" (UID: \"f0a6fc20-9a8f-4e97-8689-890f8a931a86\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.781589 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttvpj\" (UniqueName: \"kubernetes.io/projected/5ab22459-f606-452e-a71d-9f7e9212518d-kube-api-access-ttvpj\") pod \"cluster-samples-operator-665b6dd947-lq2vd\" (UID: \"5ab22459-f606-452e-a71d-9f7e9212518d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lq2vd"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.802342 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l97jh\" (UniqueName: \"kubernetes.io/projected/a99b07fd-7413-4523-8812-f0c7fe540f6d-kube-api-access-l97jh\") pod \"apiserver-76f77b778f-n4rj2\" (UID: \"a99b07fd-7413-4523-8812-f0c7fe540f6d\") " pod="openshift-apiserver/apiserver-76f77b778f-n4rj2"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.802610 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.820473 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzxpp\" (UniqueName: \"kubernetes.io/projected/f093c2f4-8a68-4d38-b957-21dd36402984-kube-api-access-wzxpp\") pod \"route-controller-manager-6576b87f9c-fs4gv\" (UID: \"f093c2f4-8a68-4d38-b957-21dd36402984\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.837946 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-bqx75"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.843329 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqh4q\" (UniqueName: \"kubernetes.io/projected/22f4cece-ea69-4c25-b492-8d03d960353e-kube-api-access-fqh4q\") pod \"machine-api-operator-5694c8668f-q65jj\" (UID: \"22f4cece-ea69-4c25-b492-8d03d960353e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q65jj"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.847151 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.853398 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1ded8a8f-f67c-422c-9818-d2ac883d4026-images\") pod \"machine-config-operator-74547568cd-b6f2t\" (UID: \"1ded8a8f-f67c-422c-9818-d2ac883d4026\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b6f2t"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.867389 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.870660 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.891403 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ncttr"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.901617 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.906714 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.919879 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1ded8a8f-f67c-422c-9818-d2ac883d4026-proxy-tls\") pod \"machine-config-operator-74547568cd-b6f2t\" (UID: \"1ded8a8f-f67c-422c-9818-d2ac883d4026\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b6f2t"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.921554 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfddr"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.928218 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.948541 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.967400 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 29 11:23:49 crc kubenswrapper[4766]: I0129 11:23:49.987057 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.007335 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.055398 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-q65jj"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.164808 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lq2vd"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.165316 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-n4rj2"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.166827 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.168582 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.169264 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.169519 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.170276 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.170651 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.170859 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.174233 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.177508 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.181511 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a78c46d-38f6-4cae-9fa7-36adb60b921e-trusted-ca\") pod \"ingress-operator-5b745b69d9-2st4g\" (UID: \"0a78c46d-38f6-4cae-9fa7-36adb60b921e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2st4g"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.181667 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7a96ad2-50e7-4cc9-8070-185cb9d97774-config\") pod \"kube-controller-manager-operator-78b949d7b-9wgx9\" (UID: \"f7a96ad2-50e7-4cc9-8070-185cb9d97774\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9wgx9"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.186570 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b6b1d6a6-3e31-4fcf-88e2-d73f910a77ef-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kgqmk\" (UID: \"b6b1d6a6-3e31-4fcf-88e2-d73f910a77ef\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kgqmk"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.186603 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7a96ad2-50e7-4cc9-8070-185cb9d97774-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9wgx9\" (UID: \"f7a96ad2-50e7-4cc9-8070-185cb9d97774\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9wgx9"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.187115 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0a78c46d-38f6-4cae-9fa7-36adb60b921e-metrics-tls\") pod \"ingress-operator-5b745b69d9-2st4g\" (UID: \"0a78c46d-38f6-4cae-9fa7-36adb60b921e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2st4g"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.187619 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.207817 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.226590 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.252015 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.258896 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bqx75"]
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.261990 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g"]
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.266347 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/89eca04b-5abc-42ee-8878-094433bfe94b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-d8l67\" (UID: \"89eca04b-5abc-42ee-8878-094433bfe94b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d8l67"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.268840 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.289672 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.294138 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zj2l7"]
Jan 29 11:23:50 crc kubenswrapper[4766]: W0129 11:23:50.299373 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0a6fc20_9a8f_4e97_8689_890f8a931a86.slice/crio-41788c9cf55f951591f5c43efbf89b632c776673d2ab46b422d2b957e71904b9 WatchSource:0}: Error finding container 41788c9cf55f951591f5c43efbf89b632c776673d2ab46b422d2b957e71904b9: Status 404 returned error can't find the container with id 41788c9cf55f951591f5c43efbf89b632c776673d2ab46b422d2b957e71904b9
Jan 29 11:23:50 crc kubenswrapper[4766]: W0129 11:23:50.299706 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3992a1ef_5774_468c_9640_cd23218862cc.slice/crio-131f62429646160b85d386e624b16264cb570b25e5be29f1cbb006ae322acc04 WatchSource:0}: Error finding container 131f62429646160b85d386e624b16264cb570b25e5be29f1cbb006ae322acc04: Status 404 returned error can't find the container with id 131f62429646160b85d386e624b16264cb570b25e5be29f1cbb006ae322acc04
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.308735 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 29 11:23:50 crc kubenswrapper[4766]: W0129 11:23:50.309780 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cf63d06_b674_4a7b_b896_5c78bc9d412d.slice/crio-5c6291896619dcd1a2a78864038ecfa35d04f183dc8d9a419729d8a37613fb5d WatchSource:0}: Error finding container 5c6291896619dcd1a2a78864038ecfa35d04f183dc8d9a419729d8a37613fb5d: Status 404 returned error can't find the container with id 5c6291896619dcd1a2a78864038ecfa35d04f183dc8d9a419729d8a37613fb5d
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.330909 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.336250 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b311e5d-45ff-425d-b8af-d4cd47ccd4ea-serving-cert\") pod \"service-ca-operator-777779d784-jjfht\" (UID: \"5b311e5d-45ff-425d-b8af-d4cd47ccd4ea\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jjfht"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.336951 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-ncttr"]
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.347149 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 29 11:23:50 crc kubenswrapper[4766]: W0129 11:23:50.356977 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod569bc384_3b96_4207_8d46_5a27bf7f21cd.slice/crio-1e5c00803a283cdc6130f856d8323901c2a7547e6dff9412f2efb558d407c1d0 WatchSource:0}: Error finding container 1e5c00803a283cdc6130f856d8323901c2a7547e6dff9412f2efb558d407c1d0: Status 404 returned error can't find the container with id 1e5c00803a283cdc6130f856d8323901c2a7547e6dff9412f2efb558d407c1d0
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.365965 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfddr"]
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.368667 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.393159 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.403114 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c9f47551-239d-46a0-857b-34eec9853f06-profile-collector-cert\") pod \"olm-operator-6b444d44fb-x28zg\" (UID: \"c9f47551-239d-46a0-857b-34eec9853f06\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x28zg"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.405230 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/699d99f6-a65f-4822-80af-7f254046575f-profile-collector-cert\") pod \"catalog-operator-68c6474976-9wk84\" (UID: \"699d99f6-a65f-4822-80af-7f254046575f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9wk84"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.406678 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.406700 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3cfb993e-e305-4ad1-81f6-349bc2544e60-secret-volume\") pod \"collect-profiles-29494755-ff4r9\" (UID: \"3cfb993e-e305-4ad1-81f6-349bc2544e60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-ff4r9"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.427370 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lq2vd"]
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.429897 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.437677 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b311e5d-45ff-425d-b8af-d4cd47ccd4ea-config\") pod \"service-ca-operator-777779d784-jjfht\" (UID: \"5b311e5d-45ff-425d-b8af-d4cd47ccd4ea\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jjfht"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.439570 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" event={"ID":"2cf63d06-b674-4a7b-b896-5c78bc9d412d","Type":"ContainerStarted","Data":"5c6291896619dcd1a2a78864038ecfa35d04f183dc8d9a419729d8a37613fb5d"}
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.448546 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-q65jj"]
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.445721 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/699d99f6-a65f-4822-80af-7f254046575f-srv-cert\") pod \"catalog-operator-68c6474976-9wk84\" (UID: \"699d99f6-a65f-4822-80af-7f254046575f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9wk84"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.445083 4766 request.go:700] Waited for 1.002581627s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.452862 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.457259 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" event={"ID":"f0a6fc20-9a8f-4e97-8689-890f8a931a86","Type":"ContainerStarted","Data":"41788c9cf55f951591f5c43efbf89b632c776673d2ab46b422d2b957e71904b9"}
Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.470397 4766 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.470838 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ncttr" event={"ID":"569bc384-3b96-4207-8d46-5a27bf7f21cd","Type":"ContainerStarted","Data":"1e5c00803a283cdc6130f856d8323901c2a7547e6dff9412f2efb558d407c1d0"} Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.475158 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bqx75" event={"ID":"3992a1ef-5774-468c-9640-cd23218862cc","Type":"ContainerStarted","Data":"131f62429646160b85d386e624b16264cb570b25e5be29f1cbb006ae322acc04"} Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.487078 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.492570 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv"] Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.508818 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-n4rj2"] Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.509156 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.513504 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/faf12f57-ca0e-47d4-bb9c-06b758d0ebbc-metrics-certs\") pod \"router-default-5444994796-h54ww\" (UID: \"faf12f57-ca0e-47d4-bb9c-06b758d0ebbc\") " pod="openshift-ingress/router-default-5444994796-h54ww" Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.528303 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.531082 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/faf12f57-ca0e-47d4-bb9c-06b758d0ebbc-service-ca-bundle\") pod \"router-default-5444994796-h54ww\" (UID: \"faf12f57-ca0e-47d4-bb9c-06b758d0ebbc\") " pod="openshift-ingress/router-default-5444994796-h54ww" Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.548258 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.567758 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.588380 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.602538 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/faf12f57-ca0e-47d4-bb9c-06b758d0ebbc-default-certificate\") pod \"router-default-5444994796-h54ww\" (UID: \"faf12f57-ca0e-47d4-bb9c-06b758d0ebbc\") " pod="openshift-ingress/router-default-5444994796-h54ww" Jan 29 11:23:50 crc kubenswrapper[4766]: W0129 11:23:50.604918 4766 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda99b07fd_7413_4523_8812_f0c7fe540f6d.slice/crio-b1232af1bc389286d32107c81979a8690a85a201e9cad24ae08ffcba470186dd WatchSource:0}: Error finding container b1232af1bc389286d32107c81979a8690a85a201e9cad24ae08ffcba470186dd: Status 404 returned error can't find the container with id b1232af1bc389286d32107c81979a8690a85a201e9cad24ae08ffcba470186dd Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.607698 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.612374 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/faf12f57-ca0e-47d4-bb9c-06b758d0ebbc-stats-auth\") pod \"router-default-5444994796-h54ww\" (UID: \"faf12f57-ca0e-47d4-bb9c-06b758d0ebbc\") " pod="openshift-ingress/router-default-5444994796-h54ww" Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.626920 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.647043 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.653975 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/77154bcb-c7aa-4ee4-b8a4-3e599a303191-proxy-tls\") pod \"machine-config-controller-84d6567774-mfbhv\" (UID: \"77154bcb-c7aa-4ee4-b8a4-3e599a303191\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfbhv" Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.668927 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 11:23:50 crc kubenswrapper[4766]: E0129 11:23:50.687664 4766 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Jan 29 11:23:50 crc kubenswrapper[4766]: E0129 11:23:50.687783 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/72cf9723-cba4-4f3b-90c4-c8b919e9b7a8-marketplace-trusted-ca podName:72cf9723-cba4-4f3b-90c4-c8b919e9b7a8 nodeName:}" failed. No retries permitted until 2026-01-29 11:23:51.187759739 +0000 UTC m=+168.300152750 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/72cf9723-cba4-4f3b-90c4-c8b919e9b7a8-marketplace-trusted-ca") pod "marketplace-operator-79b997595-ztc7c" (UID: "72cf9723-cba4-4f3b-90c4-c8b919e9b7a8") : failed to sync configmap cache: timed out waiting for the condition Jan 29 11:23:50 crc kubenswrapper[4766]: E0129 11:23:50.687922 4766 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 29 11:23:50 crc kubenswrapper[4766]: E0129 11:23:50.688021 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaa7f58f-baeb-4ce9-8752-b1deb9ec5103-webhook-cert podName:eaa7f58f-baeb-4ce9-8752-b1deb9ec5103 nodeName:}" failed. No retries permitted until 2026-01-29 11:23:51.188001126 +0000 UTC m=+168.300394137 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/eaa7f58f-baeb-4ce9-8752-b1deb9ec5103-webhook-cert") pod "packageserver-d55dfcdfc-ffkp6" (UID: "eaa7f58f-baeb-4ce9-8752-b1deb9ec5103") : failed to sync secret cache: timed out waiting for the condition Jan 29 11:23:50 crc kubenswrapper[4766]: E0129 11:23:50.689661 4766 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 29 11:23:50 crc kubenswrapper[4766]: E0129 11:23:50.689712 4766 secret.go:188] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 29 11:23:50 crc kubenswrapper[4766]: E0129 11:23:50.689769 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaa7f58f-baeb-4ce9-8752-b1deb9ec5103-apiservice-cert podName:eaa7f58f-baeb-4ce9-8752-b1deb9ec5103 nodeName:}" failed. No retries permitted until 2026-01-29 11:23:51.189744646 +0000 UTC m=+168.302137657 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/eaa7f58f-baeb-4ce9-8752-b1deb9ec5103-apiservice-cert") pod "packageserver-d55dfcdfc-ffkp6" (UID: "eaa7f58f-baeb-4ce9-8752-b1deb9ec5103") : failed to sync secret cache: timed out waiting for the condition Jan 29 11:23:50 crc kubenswrapper[4766]: E0129 11:23:50.689832 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba79e1ab-c194-4c87-bd4f-45a4845b4d32-serving-cert podName:ba79e1ab-c194-4c87-bd4f-45a4845b4d32 nodeName:}" failed. No retries permitted until 2026-01-29 11:23:51.189804858 +0000 UTC m=+168.302197939 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ba79e1ab-c194-4c87-bd4f-45a4845b4d32-serving-cert") pod "kube-storage-version-migrator-operator-b67b599dd-fr4mj" (UID: "ba79e1ab-c194-4c87-bd4f-45a4845b4d32") : failed to sync secret cache: timed out waiting for the condition Jan 29 11:23:50 crc kubenswrapper[4766]: E0129 11:23:50.689838 4766 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: failed to sync configmap cache: timed out waiting for the condition Jan 29 11:23:50 crc kubenswrapper[4766]: E0129 11:23:50.689870 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba79e1ab-c194-4c87-bd4f-45a4845b4d32-config podName:ba79e1ab-c194-4c87-bd4f-45a4845b4d32 nodeName:}" failed. No retries permitted until 2026-01-29 11:23:51.189863959 +0000 UTC m=+168.302256970 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ba79e1ab-c194-4c87-bd4f-45a4845b4d32-config") pod "kube-storage-version-migrator-operator-b67b599dd-fr4mj" (UID: "ba79e1ab-c194-4c87-bd4f-45a4845b4d32") : failed to sync configmap cache: timed out waiting for the condition Jan 29 11:23:50 crc kubenswrapper[4766]: E0129 11:23:50.689872 4766 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 29 11:23:50 crc kubenswrapper[4766]: E0129 11:23:50.689891 4766 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Jan 29 11:23:50 crc kubenswrapper[4766]: E0129 11:23:50.689919 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/089ac99c-bbe9-48d2-81fe-a021c3c218a4-package-server-manager-serving-cert podName:089ac99c-bbe9-48d2-81fe-a021c3c218a4 nodeName:}" failed. No retries permitted until 2026-01-29 11:23:51.18990409 +0000 UTC m=+168.302297191 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/089ac99c-bbe9-48d2-81fe-a021c3c218a4-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-kt5b7" (UID: "089ac99c-bbe9-48d2-81fe-a021c3c218a4") : failed to sync secret cache: timed out waiting for the condition Jan 29 11:23:50 crc kubenswrapper[4766]: E0129 11:23:50.689941 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72cf9723-cba4-4f3b-90c4-c8b919e9b7a8-marketplace-operator-metrics podName:72cf9723-cba4-4f3b-90c4-c8b919e9b7a8 nodeName:}" failed. No retries permitted until 2026-01-29 11:23:51.189932091 +0000 UTC m=+168.302325222 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/72cf9723-cba4-4f3b-90c4-c8b919e9b7a8-marketplace-operator-metrics") pod "marketplace-operator-79b997595-ztc7c" (UID: "72cf9723-cba4-4f3b-90c4-c8b919e9b7a8") : failed to sync secret cache: timed out waiting for the condition Jan 29 11:23:50 crc kubenswrapper[4766]: E0129 11:23:50.689982 4766 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Jan 29 11:23:50 crc kubenswrapper[4766]: E0129 11:23:50.689988 4766 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 29 11:23:50 crc kubenswrapper[4766]: E0129 11:23:50.690017 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae-metrics-tls podName:2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae nodeName:}" failed. No retries permitted until 2026-01-29 11:23:51.190008573 +0000 UTC m=+168.302401664 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae-metrics-tls") pod "dns-default-cnns4" (UID: "2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae") : failed to sync secret cache: timed out waiting for the condition Jan 29 11:23:50 crc kubenswrapper[4766]: E0129 11:23:50.690011 4766 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Jan 29 11:23:50 crc kubenswrapper[4766]: E0129 11:23:50.690028 4766 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Jan 29 11:23:50 crc kubenswrapper[4766]: E0129 11:23:50.690038 4766 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Jan 29 11:23:50 crc kubenswrapper[4766]: E0129 11:23:50.690106 4766 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Jan 29 11:23:50 crc kubenswrapper[4766]: E0129 11:23:50.692940 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9f47551-239d-46a0-857b-34eec9853f06-srv-cert podName:c9f47551-239d-46a0-857b-34eec9853f06 nodeName:}" failed. No retries permitted until 2026-01-29 11:23:51.190890879 +0000 UTC m=+168.303283890 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c9f47551-239d-46a0-857b-34eec9853f06-srv-cert") pod "olm-operator-6b444d44fb-x28zg" (UID: "c9f47551-239d-46a0-857b-34eec9853f06") : failed to sync secret cache: timed out waiting for the condition Jan 29 11:23:50 crc kubenswrapper[4766]: E0129 11:23:50.692999 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5-signing-cabundle podName:fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5 nodeName:}" failed. No retries permitted until 2026-01-29 11:23:51.192979668 +0000 UTC m=+168.305372679 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5-signing-cabundle") pod "service-ca-9c57cc56f-zxdzm" (UID: "fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5") : failed to sync configmap cache: timed out waiting for the condition Jan 29 11:23:50 crc kubenswrapper[4766]: E0129 11:23:50.693019 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3cfb993e-e305-4ad1-81f6-349bc2544e60-config-volume podName:3cfb993e-e305-4ad1-81f6-349bc2544e60 nodeName:}" failed. No retries permitted until 2026-01-29 11:23:51.193010029 +0000 UTC m=+168.305403050 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3cfb993e-e305-4ad1-81f6-349bc2544e60-config-volume") pod "collect-profiles-29494755-ff4r9" (UID: "3cfb993e-e305-4ad1-81f6-349bc2544e60") : failed to sync configmap cache: timed out waiting for the condition Jan 29 11:23:50 crc kubenswrapper[4766]: E0129 11:23:50.693049 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5-signing-key podName:fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5 nodeName:}" failed. No retries permitted until 2026-01-29 11:23:51.193027449 +0000 UTC m=+168.305420460 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5-signing-key") pod "service-ca-9c57cc56f-zxdzm" (UID: "fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5") : failed to sync secret cache: timed out waiting for the condition Jan 29 11:23:50 crc kubenswrapper[4766]: E0129 11:23:50.693067 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae-config-volume podName:2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae nodeName:}" failed. No retries permitted until 2026-01-29 11:23:51.19305819 +0000 UTC m=+168.305451201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae-config-volume") pod "dns-default-cnns4" (UID: "2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae") : failed to sync configmap cache: timed out waiting for the condition Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.705954 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.709885 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.726718 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.746616 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.768049 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.787641 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.807550 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.828085 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.847279 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.871729 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.887468 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.908132 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.928114 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.947005 4766 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.967383 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 11:23:50 crc kubenswrapper[4766]: I0129 11:23:50.987979 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.007277 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.026687 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.047065 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.069659 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.087873 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.107613 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.146272 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l5ps\" (UniqueName: \"kubernetes.io/projected/e0d3c828-9641-4030-acfc-282a4dadcf1d-kube-api-access-8l5ps\") pod \"authentication-operator-69f744f599-vnx7s\" (UID: \"e0d3c828-9641-4030-acfc-282a4dadcf1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vnx7s" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.165421 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2bsn\" (UniqueName: \"kubernetes.io/projected/eeaa3b58-307f-43e3-b8be-f5a93ae40bdc-kube-api-access-d2bsn\") pod \"cluster-image-registry-operator-dc59b4c8b-6mbb9\" (UID: \"eeaa3b58-307f-43e3-b8be-f5a93ae40bdc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6mbb9" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.182307 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skv2q\" (UniqueName: \"kubernetes.io/projected/53d8ed7c-3414-4b1f-98c0-5b577dbc5b31-kube-api-access-skv2q\") pod \"openshift-config-operator-7777fb866f-qkwt7\" (UID: \"53d8ed7c-3414-4b1f-98c0-5b577dbc5b31\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkwt7" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.185071 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkwt7" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.189630 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72cf9723-cba4-4f3b-90c4-c8b919e9b7a8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ztc7c\" (UID: \"72cf9723-cba4-4f3b-90c4-c8b919e9b7a8\") " pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.189705 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eaa7f58f-baeb-4ce9-8752-b1deb9ec5103-webhook-cert\") pod \"packageserver-d55dfcdfc-ffkp6\" (UID: \"eaa7f58f-baeb-4ce9-8752-b1deb9ec5103\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ffkp6" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.189899 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eaa7f58f-baeb-4ce9-8752-b1deb9ec5103-apiservice-cert\") pod \"packageserver-d55dfcdfc-ffkp6\" (UID: \"eaa7f58f-baeb-4ce9-8752-b1deb9ec5103\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ffkp6" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.189941 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba79e1ab-c194-4c87-bd4f-45a4845b4d32-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fr4mj\" (UID: \"ba79e1ab-c194-4c87-bd4f-45a4845b4d32\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr4mj" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.189989 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba79e1ab-c194-4c87-bd4f-45a4845b4d32-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fr4mj\" (UID: \"ba79e1ab-c194-4c87-bd4f-45a4845b4d32\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr4mj" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.190057 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/089ac99c-bbe9-48d2-81fe-a021c3c218a4-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kt5b7\" (UID: \"089ac99c-bbe9-48d2-81fe-a021c3c218a4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kt5b7" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.190186 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae-metrics-tls\") pod \"dns-default-cnns4\" (UID: \"2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae\") " pod="openshift-dns/dns-default-cnns4" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.192004 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba79e1ab-c194-4c87-bd4f-45a4845b4d32-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fr4mj\" (UID: \"ba79e1ab-c194-4c87-bd4f-45a4845b4d32\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr4mj" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.193762 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae-metrics-tls\") pod \"dns-default-cnns4\" (UID: \"2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae\") " pod="openshift-dns/dns-default-cnns4" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.193812 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba79e1ab-c194-4c87-bd4f-45a4845b4d32-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fr4mj\" (UID: \"ba79e1ab-c194-4c87-bd4f-45a4845b4d32\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr4mj" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.194463 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eaa7f58f-baeb-4ce9-8752-b1deb9ec5103-webhook-cert\") pod \"packageserver-d55dfcdfc-ffkp6\" (UID: \"eaa7f58f-baeb-4ce9-8752-b1deb9ec5103\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ffkp6" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.197217 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eaa7f58f-baeb-4ce9-8752-b1deb9ec5103-apiservice-cert\") pod \"packageserver-d55dfcdfc-ffkp6\" (UID: \"eaa7f58f-baeb-4ce9-8752-b1deb9ec5103\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ffkp6" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.202978 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eeaa3b58-307f-43e3-b8be-f5a93ae40bdc-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6mbb9\" (UID: \"eeaa3b58-307f-43e3-b8be-f5a93ae40bdc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6mbb9" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.208782 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/089ac99c-bbe9-48d2-81fe-a021c3c218a4-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kt5b7\" (UID: \"089ac99c-bbe9-48d2-81fe-a021c3c218a4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kt5b7" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.210400 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72cf9723-cba4-4f3b-90c4-c8b919e9b7a8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ztc7c\" (UID: \"72cf9723-cba4-4f3b-90c4-c8b919e9b7a8\") " pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.223978 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr8hf\" (UniqueName: \"kubernetes.io/projected/6ee06c09-afd6-4909-a722-2812c4c391b7-kube-api-access-vr8hf\") pod \"console-operator-58897d9998-xwtsb\" (UID: \"6ee06c09-afd6-4909-a722-2812c4c391b7\") " pod="openshift-console-operator/console-operator-58897d9998-xwtsb" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.232398 
4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6mbb9" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.242166 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-vnx7s" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.244647 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzd9l\" (UniqueName: \"kubernetes.io/projected/e7553566-6a69-4542-892b-bd74d3c8ac0e-kube-api-access-dzd9l\") pod \"machine-approver-56656f9798-mkpxk\" (UID: \"e7553566-6a69-4542-892b-bd74d3c8ac0e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mkpxk" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.254178 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mkpxk" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.264446 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4mrt\" (UniqueName: \"kubernetes.io/projected/a7315b30-c300-4afe-b798-de15fe9e9cc8-kube-api-access-m4mrt\") pod \"oauth-openshift-558db77b4-r9vtz\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.287692 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.292820 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5-signing-cabundle\") pod \"service-ca-9c57cc56f-zxdzm\" (UID: \"fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5\") " pod="openshift-service-ca/service-ca-9c57cc56f-zxdzm" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.292888 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c9f47551-239d-46a0-857b-34eec9853f06-srv-cert\") pod \"olm-operator-6b444d44fb-x28zg\" (UID: \"c9f47551-239d-46a0-857b-34eec9853f06\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x28zg" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.292983 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5-signing-key\") pod \"service-ca-9c57cc56f-zxdzm\" (UID: \"fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5\") " pod="openshift-service-ca/service-ca-9c57cc56f-zxdzm" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.293004 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae-config-volume\") pod \"dns-default-cnns4\" (UID: \"2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae\") " pod="openshift-dns/dns-default-cnns4" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.293154 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/72cf9723-cba4-4f3b-90c4-c8b919e9b7a8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ztc7c\" (UID: \"72cf9723-cba4-4f3b-90c4-c8b919e9b7a8\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.293226 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cfb993e-e305-4ad1-81f6-349bc2544e60-config-volume\") pod \"collect-profiles-29494755-ff4r9\" (UID: \"3cfb993e-e305-4ad1-81f6-349bc2544e60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-ff4r9" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.293939 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5-signing-cabundle\") pod \"service-ca-9c57cc56f-zxdzm\" (UID: \"fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5\") " pod="openshift-service-ca/service-ca-9c57cc56f-zxdzm" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.294613 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cfb993e-e305-4ad1-81f6-349bc2544e60-config-volume\") pod \"collect-profiles-29494755-ff4r9\" (UID: \"3cfb993e-e305-4ad1-81f6-349bc2544e60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-ff4r9" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.294756 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae-config-volume\") pod \"dns-default-cnns4\" (UID: \"2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae\") " pod="openshift-dns/dns-default-cnns4" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.298250 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c9f47551-239d-46a0-857b-34eec9853f06-srv-cert\") pod \"olm-operator-6b444d44fb-x28zg\" (UID: \"c9f47551-239d-46a0-857b-34eec9853f06\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x28zg" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.298665 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5-signing-key\") pod \"service-ca-9c57cc56f-zxdzm\" (UID: \"fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5\") " pod="openshift-service-ca/service-ca-9c57cc56f-zxdzm" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.299905 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/72cf9723-cba4-4f3b-90c4-c8b919e9b7a8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ztc7c\" (UID: \"72cf9723-cba4-4f3b-90c4-c8b919e9b7a8\") " pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.311453 4766 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.331943 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.367763 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.386181 4766 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.415748 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.429018 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.445735 4766 request.go:700] Waited for 1.908788485s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&limit=500&resourceVersion=0 Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.448512 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.455442 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.461548 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qkwt7"] Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.462910 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xwtsb" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.467637 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.477459 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6mbb9"] Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.483188 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bqx75" event={"ID":"3992a1ef-5774-468c-9640-cd23218862cc","Type":"ContainerStarted","Data":"0be8260cba8279db0c93236ca7106096debed7784643b6f1e3faf12f21a7ddb5"} Jan 29 11:23:51 crc kubenswrapper[4766]: W0129 11:23:51.483998 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53d8ed7c_3414_4b1f_98c0_5b577dbc5b31.slice/crio-5116e788ae859eeae5a1e4d95e1a8a7b21b6dc06c38cd7a66c0b3c41f49b0bad WatchSource:0}: Error finding container 5116e788ae859eeae5a1e4d95e1a8a7b21b6dc06c38cd7a66c0b3c41f49b0bad: Status 404 returned error can't find the container with id 5116e788ae859eeae5a1e4d95e1a8a7b21b6dc06c38cd7a66c0b3c41f49b0bad Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.485588 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfddr" event={"ID":"d2900468-bc28-42ef-8624-0e5b0a80f772","Type":"ContainerStarted","Data":"99114a2e88db5548b33013a74a8d9d2b7726da052dc3ed52c6aa5439114245f9"} Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.487890 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.489775 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mkpxk" event={"ID":"e7553566-6a69-4542-892b-bd74d3c8ac0e","Type":"ContainerStarted","Data":"fe17a29eb64a218d5ba9bddc85b1e943d8eccd3250a02bb85cb15ebeceb56821"} Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.492472 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" event={"ID":"a99b07fd-7413-4523-8812-f0c7fe540f6d","Type":"ContainerStarted","Data":"b1232af1bc389286d32107c81979a8690a85a201e9cad24ae08ffcba470186dd"} Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.500736 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vnx7s"] Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.505303 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" event={"ID":"2cf63d06-b674-4a7b-b896-5c78bc9d412d","Type":"ContainerStarted","Data":"c9ed418193af1a1da51ce2e39098ae9b1ab5ffe0809dc825165924034906ce9c"} Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.505960 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.507373 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" event={"ID":"f0a6fc20-9a8f-4e97-8689-890f8a931a86","Type":"ContainerStarted","Data":"c44309ab6fddd1c1e51c1ba76dc685f1186e79bf02348add33154b2d7654a547"} Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.508840 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" event={"ID":"f093c2f4-8a68-4d38-b957-21dd36402984","Type":"ContainerStarted","Data":"789284ba86ae621d070e8ac02e93c9c37a8dd53e9a8bf96804c1378015868a4c"} Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.508865 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" event={"ID":"f093c2f4-8a68-4d38-b957-21dd36402984","Type":"ContainerStarted","Data":"ed10e7f650ff7f442fc6a3499d8c7693f70d810f602bb22f1b6b4aa1ab048d2f"} Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.510454 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ncttr" event={"ID":"569bc384-3b96-4207-8d46-5a27bf7f21cd","Type":"ContainerStarted","Data":"835a2e7e5ae0c51e2820db21831e46bafad02533bab7dd4328154a7c3c0f665f"} Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.511969 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-q65jj" event={"ID":"22f4cece-ea69-4c25-b492-8d03d960353e","Type":"ContainerStarted","Data":"bd0decf5d36f46b7ebe27ef92cf8e4bae68c887aa1f76671a5bce1d06b3f697a"} Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.512004 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-q65jj" event={"ID":"22f4cece-ea69-4c25-b492-8d03d960353e","Type":"ContainerStarted","Data":"63cf6a1c70b6818a66cdb54bdf09b6ca0b38214c92cc17df8b93603a8cff2d4e"} Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.513278 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lq2vd" 
event={"ID":"5ab22459-f606-452e-a71d-9f7e9212518d","Type":"ContainerStarted","Data":"36c1cf592a659236b438889b83882428373c355520caf339110c20c159604c60"} Jan 29 11:23:51 crc kubenswrapper[4766]: W0129 11:23:51.517708 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeeaa3b58_307f_43e3_b8be_f5a93ae40bdc.slice/crio-56e450557efb7e6a96ff4053779acebb0dcc80db90f8af84042da7c03f8f0d0c WatchSource:0}: Error finding container 56e450557efb7e6a96ff4053779acebb0dcc80db90f8af84042da7c03f8f0d0c: Status 404 returned error can't find the container with id 56e450557efb7e6a96ff4053779acebb0dcc80db90f8af84042da7c03f8f0d0c Jan 29 11:23:51 crc kubenswrapper[4766]: W0129 11:23:51.520050 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0d3c828_9641_4030_acfc_282a4dadcf1d.slice/crio-f0a5e3b09c53a616af16815ae8e2ce7b4b4a306a6cc330818b63658d0a4d29c8 WatchSource:0}: Error finding container f0a5e3b09c53a616af16815ae8e2ce7b4b4a306a6cc330818b63658d0a4d29c8: Status 404 returned error can't find the container with id f0a5e3b09c53a616af16815ae8e2ce7b4b4a306a6cc330818b63658d0a4d29c8 Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.520664 4766 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-zj2l7 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.520764 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" podUID="2cf63d06-b674-4a7b-b896-5c78bc9d412d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.525223 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v4gt\" (UniqueName: \"kubernetes.io/projected/1ded8a8f-f67c-422c-9818-d2ac883d4026-kube-api-access-6v4gt\") pod \"machine-config-operator-74547568cd-b6f2t\" (UID: \"1ded8a8f-f67c-422c-9818-d2ac883d4026\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b6f2t" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.561688 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phxkp\" (UniqueName: \"kubernetes.io/projected/9628ffe6-8bd9-40c2-82d9-d844078b7086-kube-api-access-phxkp\") pod \"dns-operator-744455d44c-7zssb\" (UID: \"9628ffe6-8bd9-40c2-82d9-d844078b7086\") " pod="openshift-dns-operator/dns-operator-744455d44c-7zssb" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.562684 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b89e415-2430-4431-a579-fe555ba8771f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xxg9v\" (UID: \"1b89e415-2430-4431-a579-fe555ba8771f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxg9v" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.590689 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqcv9\" (UniqueName: 
\"kubernetes.io/projected/1e7de4ba-321d-4b46-b66b-cc1f437bd804-kube-api-access-mqcv9\") pod \"etcd-operator-b45778765-x9vrs\" (UID: \"1e7de4ba-321d-4b46-b66b-cc1f437bd804\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x9vrs" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.606477 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df181b4a-3b70-456c-9fd8-c1d03bee42f5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mjdnh\" (UID: \"df181b4a-3b70-456c-9fd8-c1d03bee42f5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mjdnh" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.616562 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mjdnh" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.627184 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-x9vrs" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.632201 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfm7x\" (UniqueName: \"kubernetes.io/projected/7d3ca5b4-1aba-4925-a04a-8d0d3ee29328-kube-api-access-sfm7x\") pod \"openshift-controller-manager-operator-756b6f6bc6-gj52d\" (UID: \"7d3ca5b4-1aba-4925-a04a-8d0d3ee29328\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gj52d" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.641652 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxg9v" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.652552 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mrl2\" (UniqueName: \"kubernetes.io/projected/2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae-kube-api-access-5mrl2\") pod \"dns-default-cnns4\" (UID: \"2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae\") " pod="openshift-dns/dns-default-cnns4" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.655321 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-7zssb" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.664155 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b6f2t" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.665969 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmf2l\" (UniqueName: \"kubernetes.io/projected/06681e16-3449-44aa-9680-1f1566bca8f3-kube-api-access-cmf2l\") pod \"csi-hostpathplugin-crmjf\" (UID: \"06681e16-3449-44aa-9680-1f1566bca8f3\") " pod="hostpath-provisioner/csi-hostpathplugin-crmjf" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.687054 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0a78c46d-38f6-4cae-9fa7-36adb60b921e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2st4g\" (UID: \"0a78c46d-38f6-4cae-9fa7-36adb60b921e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2st4g" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.712363 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs67p\" (UniqueName: \"kubernetes.io/projected/72cf9723-cba4-4f3b-90c4-c8b919e9b7a8-kube-api-access-vs67p\") pod \"marketplace-operator-79b997595-ztc7c\" (UID: \"72cf9723-cba4-4f3b-90c4-c8b919e9b7a8\") " pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.729334 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm5v6\" (UniqueName: \"kubernetes.io/projected/eaa7f58f-baeb-4ce9-8752-b1deb9ec5103-kube-api-access-qm5v6\") pod \"packageserver-d55dfcdfc-ffkp6\" (UID: \"eaa7f58f-baeb-4ce9-8752-b1deb9ec5103\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ffkp6" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.733870 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xwtsb"] Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.756946 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.780460 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcpbg\" (UniqueName: \"kubernetes.io/projected/b6b1d6a6-3e31-4fcf-88e2-d73f910a77ef-kube-api-access-bcpbg\") pod \"control-plane-machine-set-operator-78cbb6b69f-kgqmk\" (UID: \"b6b1d6a6-3e31-4fcf-88e2-d73f910a77ef\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kgqmk" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.780921 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ffkp6" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.783834 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b47f\" (UniqueName: \"kubernetes.io/projected/0a78c46d-38f6-4cae-9fa7-36adb60b921e-kube-api-access-4b47f\") pod \"ingress-operator-5b745b69d9-2st4g\" (UID: \"0a78c46d-38f6-4cae-9fa7-36adb60b921e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2st4g" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.792999 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-r9vtz"] Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.798771 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7a96ad2-50e7-4cc9-8070-185cb9d97774-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9wgx9\" (UID: \"f7a96ad2-50e7-4cc9-8070-185cb9d97774\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9wgx9" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.813256 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-cnns4" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.815039 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4846\" (UniqueName: \"kubernetes.io/projected/3cfb993e-e305-4ad1-81f6-349bc2544e60-kube-api-access-x4846\") pod \"collect-profiles-29494755-ff4r9\" (UID: \"3cfb993e-e305-4ad1-81f6-349bc2544e60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-ff4r9" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.834594 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-crmjf" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.853210 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnnlh\" (UniqueName: \"kubernetes.io/projected/89eca04b-5abc-42ee-8878-094433bfe94b-kube-api-access-wnnlh\") pod \"multus-admission-controller-857f4d67dd-d8l67\" (UID: \"89eca04b-5abc-42ee-8878-094433bfe94b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d8l67" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.865145 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zslq7\" (UniqueName: \"kubernetes.io/projected/77154bcb-c7aa-4ee4-b8a4-3e599a303191-kube-api-access-zslq7\") pod \"machine-config-controller-84d6567774-mfbhv\" (UID: \"77154bcb-c7aa-4ee4-b8a4-3e599a303191\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfbhv" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.874027 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58lqg\" (UniqueName: \"kubernetes.io/projected/699d99f6-a65f-4822-80af-7f254046575f-kube-api-access-58lqg\") pod \"catalog-operator-68c6474976-9wk84\" (UID: \"699d99f6-a65f-4822-80af-7f254046575f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9wk84" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.878066 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gj52d" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.903803 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vw88\" (UniqueName: \"kubernetes.io/projected/7d34fd54-1d88-420a-a4cc-405e9d3900ab-kube-api-access-2vw88\") pod \"migrator-59844c95c7-n8m6h\" (UID: \"7d34fd54-1d88-420a-a4cc-405e9d3900ab\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n8m6h" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.913374 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwffn\" (UniqueName: \"kubernetes.io/projected/ba79e1ab-c194-4c87-bd4f-45a4845b4d32-kube-api-access-kwffn\") pod \"kube-storage-version-migrator-operator-b67b599dd-fr4mj\" (UID: \"ba79e1ab-c194-4c87-bd4f-45a4845b4d32\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr4mj" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.931601 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mjdnh"] Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.941180 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tvz4\" (UniqueName: \"kubernetes.io/projected/089ac99c-bbe9-48d2-81fe-a021c3c218a4-kube-api-access-2tvz4\") pod \"package-server-manager-789f6589d5-kt5b7\" (UID: \"089ac99c-bbe9-48d2-81fe-a021c3c218a4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kt5b7" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.960372 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq46f\" (UniqueName: \"kubernetes.io/projected/5b311e5d-45ff-425d-b8af-d4cd47ccd4ea-kube-api-access-lq46f\") pod \"service-ca-operator-777779d784-jjfht\" (UID: \"5b311e5d-45ff-425d-b8af-d4cd47ccd4ea\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jjfht" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.974492 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2st4g" Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.978776 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkzx4\" (UniqueName: \"kubernetes.io/projected/c9f47551-239d-46a0-857b-34eec9853f06-kube-api-access-mkzx4\") pod \"olm-operator-6b444d44fb-x28zg\" (UID: \"c9f47551-239d-46a0-857b-34eec9853f06\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x28zg" Jan 29 11:23:51 crc kubenswrapper[4766]: W0129 11:23:51.987224 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf181b4a_3b70_456c_9fd8_c1d03bee42f5.slice/crio-57cfec26c29d29c540f7c462abe10357837285203dec294ffd55eb460c5f8749 WatchSource:0}: Error finding container 57cfec26c29d29c540f7c462abe10357837285203dec294ffd55eb460c5f8749: Status 404 returned error can't find the container with id 57cfec26c29d29c540f7c462abe10357837285203dec294ffd55eb460c5f8749 Jan 29 11:23:51 crc kubenswrapper[4766]: I0129 11:23:51.993973 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w82jt\" (UniqueName: \"kubernetes.io/projected/faf12f57-ca0e-47d4-bb9c-06b758d0ebbc-kube-api-access-w82jt\") pod \"router-default-5444994796-h54ww\" (UID: \"faf12f57-ca0e-47d4-bb9c-06b758d0ebbc\") " pod="openshift-ingress/router-default-5444994796-h54ww" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.002233 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kgqmk" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.009018 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9wgx9" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.013600 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkhc5\" (UniqueName: \"kubernetes.io/projected/fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5-kube-api-access-dkhc5\") pod \"service-ca-9c57cc56f-zxdzm\" (UID: \"fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5\") " pod="openshift-service-ca/service-ca-9c57cc56f-zxdzm" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.020868 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n8m6h" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.029572 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-d8l67" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.035887 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jjfht" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.041620 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-h54ww" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.049973 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfbhv" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.071400 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kt5b7" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.074576 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x28zg" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.097075 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-ff4r9" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.099447 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-zxdzm" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.115422 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr4mj" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.132180 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.132227 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bf694c5f-16c8-4b89-9b66-976601ada400-registry-certificates\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.132258 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bf694c5f-16c8-4b89-9b66-976601ada400-ca-trust-extracted\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.132289 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bf694c5f-16c8-4b89-9b66-976601ada400-installation-pull-secrets\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.132323 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf694c5f-16c8-4b89-9b66-976601ada400-trusted-ca\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.132352 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bf694c5f-16c8-4b89-9b66-976601ada400-registry-tls\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.132374 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf694c5f-16c8-4b89-9b66-976601ada400-bound-sa-token\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.132439 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdpf9\" (UniqueName: \"kubernetes.io/projected/bf694c5f-16c8-4b89-9b66-976601ada400-kube-api-access-gdpf9\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:52 crc kubenswrapper[4766]: E0129 11:23:52.132835 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:52.632819952 +0000 UTC m=+169.745212963 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.166081 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9wk84" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.233689 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.233982 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14ea2166-db98-4431-a9e8-23226dc0ee79-cert\") pod \"ingress-canary-kgjkl\" (UID: \"14ea2166-db98-4431-a9e8-23226dc0ee79\") " pod="openshift-ingress-canary/ingress-canary-kgjkl" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.234095 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bf694c5f-16c8-4b89-9b66-976601ada400-registry-tls\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.234133 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf694c5f-16c8-4b89-9b66-976601ada400-bound-sa-token\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.234160 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/19c8d347-e369-4279-9564-c87b1a996261-certs\") pod \"machine-config-server-bkp45\" (UID: \"19c8d347-e369-4279-9564-c87b1a996261\") " pod="openshift-machine-config-operator/machine-config-server-bkp45" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.234362 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdpf9\" (UniqueName: \"kubernetes.io/projected/bf694c5f-16c8-4b89-9b66-976601ada400-kube-api-access-gdpf9\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.234510 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dvwv\" (UniqueName: \"kubernetes.io/projected/19c8d347-e369-4279-9564-c87b1a996261-kube-api-access-2dvwv\") pod \"machine-config-server-bkp45\" (UID: \"19c8d347-e369-4279-9564-c87b1a996261\") " pod="openshift-machine-config-operator/machine-config-server-bkp45" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.234820 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5wjm\" (UniqueName: \"kubernetes.io/projected/14ea2166-db98-4431-a9e8-23226dc0ee79-kube-api-access-w5wjm\") pod \"ingress-canary-kgjkl\" (UID: \"14ea2166-db98-4431-a9e8-23226dc0ee79\") " pod="openshift-ingress-canary/ingress-canary-kgjkl" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.234904 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bf694c5f-16c8-4b89-9b66-976601ada400-registry-certificates\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.235012 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bf694c5f-16c8-4b89-9b66-976601ada400-ca-trust-extracted\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.235124 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/19c8d347-e369-4279-9564-c87b1a996261-node-bootstrap-token\") pod \"machine-config-server-bkp45\" (UID: \"19c8d347-e369-4279-9564-c87b1a996261\") " pod="openshift-machine-config-operator/machine-config-server-bkp45" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.235216 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bf694c5f-16c8-4b89-9b66-976601ada400-installation-pull-secrets\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.237971 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bf694c5f-16c8-4b89-9b66-976601ada400-registry-certificates\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.238083 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bf694c5f-16c8-4b89-9b66-976601ada400-ca-trust-extracted\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:52 crc kubenswrapper[4766]: E0129 11:23:52.238998 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:52.738967709 +0000 UTC m=+169.851360890 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.239655 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf694c5f-16c8-4b89-9b66-976601ada400-trusted-ca\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.242743 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf694c5f-16c8-4b89-9b66-976601ada400-trusted-ca\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.245871 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bf694c5f-16c8-4b89-9b66-976601ada400-installation-pull-secrets\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.246180 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bf694c5f-16c8-4b89-9b66-976601ada400-registry-tls\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.291289 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdpf9\" (UniqueName: \"kubernetes.io/projected/bf694c5f-16c8-4b89-9b66-976601ada400-kube-api-access-gdpf9\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.292559 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-x9vrs"] Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.342501 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7zssb"] Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.342902 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dvwv\" (UniqueName: \"kubernetes.io/projected/19c8d347-e369-4279-9564-c87b1a996261-kube-api-access-2dvwv\") pod \"machine-config-server-bkp45\" (UID: \"19c8d347-e369-4279-9564-c87b1a996261\") " pod="openshift-machine-config-operator/machine-config-server-bkp45" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.343037 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.343068 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5wjm\" (UniqueName: \"kubernetes.io/projected/14ea2166-db98-4431-a9e8-23226dc0ee79-kube-api-access-w5wjm\") pod \"ingress-canary-kgjkl\" (UID: \"14ea2166-db98-4431-a9e8-23226dc0ee79\") " pod="openshift-ingress-canary/ingress-canary-kgjkl" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.343120 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/19c8d347-e369-4279-9564-c87b1a996261-node-bootstrap-token\") pod \"machine-config-server-bkp45\" (UID: \"19c8d347-e369-4279-9564-c87b1a996261\") " pod="openshift-machine-config-operator/machine-config-server-bkp45" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.343181 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14ea2166-db98-4431-a9e8-23226dc0ee79-cert\") pod \"ingress-canary-kgjkl\" (UID: \"14ea2166-db98-4431-a9e8-23226dc0ee79\") " pod="openshift-ingress-canary/ingress-canary-kgjkl" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.343224 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/19c8d347-e369-4279-9564-c87b1a996261-certs\") pod \"machine-config-server-bkp45\" (UID: \"19c8d347-e369-4279-9564-c87b1a996261\") " pod="openshift-machine-config-operator/machine-config-server-bkp45" Jan 29 11:23:52 crc kubenswrapper[4766]: E0129 11:23:52.343585 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:52.843559261 +0000 UTC m=+169.955952442 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.350229 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/19c8d347-e369-4279-9564-c87b1a996261-node-bootstrap-token\") pod \"machine-config-server-bkp45\" (UID: \"19c8d347-e369-4279-9564-c87b1a996261\") " pod="openshift-machine-config-operator/machine-config-server-bkp45" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.351122 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf694c5f-16c8-4b89-9b66-976601ada400-bound-sa-token\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.352729 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14ea2166-db98-4431-a9e8-23226dc0ee79-cert\") pod \"ingress-canary-kgjkl\" (UID: \"14ea2166-db98-4431-a9e8-23226dc0ee79\") " pod="openshift-ingress-canary/ingress-canary-kgjkl" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.366632 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/19c8d347-e369-4279-9564-c87b1a996261-certs\") pod \"machine-config-server-bkp45\" (UID: \"19c8d347-e369-4279-9564-c87b1a996261\") " pod="openshift-machine-config-operator/machine-config-server-bkp45" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.370016 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5wjm\" (UniqueName: \"kubernetes.io/projected/14ea2166-db98-4431-a9e8-23226dc0ee79-kube-api-access-w5wjm\") pod \"ingress-canary-kgjkl\" (UID: \"14ea2166-db98-4431-a9e8-23226dc0ee79\") " pod="openshift-ingress-canary/ingress-canary-kgjkl" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.396313 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dvwv\" (UniqueName: \"kubernetes.io/projected/19c8d347-e369-4279-9564-c87b1a996261-kube-api-access-2dvwv\") pod \"machine-config-server-bkp45\" (UID: \"19c8d347-e369-4279-9564-c87b1a996261\") " pod="openshift-machine-config-operator/machine-config-server-bkp45" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.412177 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxg9v"] Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.423918 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-b6f2t"] Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.444293 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.444612 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kgjkl" Jan 29 11:23:52 crc kubenswrapper[4766]: E0129 11:23:52.444641 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:52.944621413 +0000 UTC m=+170.057014424 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.451166 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-bkp45" Jan 29 11:23:52 crc kubenswrapper[4766]: W0129 11:23:52.505834 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9628ffe6_8bd9_40c2_82d9_d844078b7086.slice/crio-bdf7f3564a35edded68a5392f5e41f43e06f44332d23f2c2738ef8af320325b4 WatchSource:0}: Error finding container bdf7f3564a35edded68a5392f5e41f43e06f44332d23f2c2738ef8af320325b4: Status 404 returned error can't find the container with id bdf7f3564a35edded68a5392f5e41f43e06f44332d23f2c2738ef8af320325b4 Jan 29 11:23:52 crc kubenswrapper[4766]: W0129 11:23:52.508012 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b89e415_2430_4431_a579_fe555ba8771f.slice/crio-8e87e0490fff30f42949fd7b85eb45b34112fdcefc0c9eb86901bfae71425cf4 WatchSource:0}: Error finding container 8e87e0490fff30f42949fd7b85eb45b34112fdcefc0c9eb86901bfae71425cf4: Status 404 returned error can't find the container with id 8e87e0490fff30f42949fd7b85eb45b34112fdcefc0c9eb86901bfae71425cf4 Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.551402 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:52 crc kubenswrapper[4766]: E0129 11:23:52.552133 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:53.052113738 +0000 UTC m=+170.164506749 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.559704 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ztc7c"] Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.635059 4766 generic.go:334] "Generic (PLEG): container finished" podID="f0a6fc20-9a8f-4e97-8689-890f8a931a86" containerID="c44309ab6fddd1c1e51c1ba76dc685f1186e79bf02348add33154b2d7654a547" exitCode=0 Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.636634 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ffkp6"] Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.636714 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" event={"ID":"f0a6fc20-9a8f-4e97-8689-890f8a931a86","Type":"ContainerDied","Data":"c44309ab6fddd1c1e51c1ba76dc685f1186e79bf02348add33154b2d7654a547"} Jan 29 11:23:52 crc kubenswrapper[4766]: W0129 11:23:52.638141 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72cf9723_cba4_4f3b_90c4_c8b919e9b7a8.slice/crio-41d6d9c1c1c95cdb4096f171331d6a959b622d98890a28f387570ac099e40b89 WatchSource:0}: Error finding container 41d6d9c1c1c95cdb4096f171331d6a959b622d98890a28f387570ac099e40b89: Status 404 returned error can't find the container with id 41d6d9c1c1c95cdb4096f171331d6a959b622d98890a28f387570ac099e40b89 Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.641191 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-cnns4"] Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.652984 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:23:52 crc kubenswrapper[4766]: E0129 11:23:52.653564 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:53.153537719 +0000 UTC m=+170.265930730 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.676041 4766 generic.go:334] "Generic (PLEG): container finished" podID="a99b07fd-7413-4523-8812-f0c7fe540f6d" containerID="107418a00fd68e5b9d12b7969fa1f0c82929d6b09f86840c8cbff15ed74fab03" exitCode=0 Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.676719 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" event={"ID":"a99b07fd-7413-4523-8812-f0c7fe540f6d","Type":"ContainerDied","Data":"107418a00fd68e5b9d12b7969fa1f0c82929d6b09f86840c8cbff15ed74fab03"} Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.680216 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" event={"ID":"a7315b30-c300-4afe-b798-de15fe9e9cc8","Type":"ContainerStarted","Data":"e5e0b62317a8747803c473fa1ee07f9e73b4bd6ed99dfbf5eeb903eef18d24be"} Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.683103 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-crmjf"] Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.683137 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-vnx7s" event={"ID":"e0d3c828-9641-4030-acfc-282a4dadcf1d","Type":"ContainerStarted","Data":"52fe7ae7c28713ec12b727e78e3cf49d4c0e759fee39bbafce0fcd96350b81ca"} Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.683241 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-vnx7s" event={"ID":"e0d3c828-9641-4030-acfc-282a4dadcf1d","Type":"ContainerStarted","Data":"f0a5e3b09c53a616af16815ae8e2ce7b4b4a306a6cc330818b63658d0a4d29c8"} Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.686707 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gj52d"] Jan 29 11:23:52 crc kubenswrapper[4766]: W0129 11:23:52.729203 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaa7f58f_baeb_4ce9_8752_b1deb9ec5103.slice/crio-762de9444ff3486c53d8e705f8b31ed9bb05c2a0735df1584c43dd1d8889a201 WatchSource:0}: Error finding container 762de9444ff3486c53d8e705f8b31ed9bb05c2a0735df1584c43dd1d8889a201: Status 404 returned error can't find the container with id 762de9444ff3486c53d8e705f8b31ed9bb05c2a0735df1584c43dd1d8889a201 Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.747216 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-x9vrs" event={"ID":"1e7de4ba-321d-4b46-b66b-cc1f437bd804","Type":"ContainerStarted","Data":"4cccefea12316ae0307649b51ea46b65a32a416466c0171590c5e82e7a0c232d"} Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.754618 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:52 crc kubenswrapper[4766]: E0129 11:23:52.756294 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:53.256239948 +0000 UTC m=+170.368633019 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.760916 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mjdnh" event={"ID":"df181b4a-3b70-456c-9fd8-c1d03bee42f5","Type":"ContainerStarted","Data":"57cfec26c29d29c540f7c462abe10357837285203dec294ffd55eb460c5f8749"} Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.791605 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfddr" event={"ID":"d2900468-bc28-42ef-8624-0e5b0a80f772","Type":"ContainerStarted","Data":"0434d54f6e60b7110f2538eb94090a3c31eafbc0d0c7261da4b468bc080da718"} Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.796894 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-7zssb" event={"ID":"9628ffe6-8bd9-40c2-82d9-d844078b7086","Type":"ContainerStarted","Data":"bdf7f3564a35edded68a5392f5e41f43e06f44332d23f2c2738ef8af320325b4"} Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.805258 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lq2vd" event={"ID":"5ab22459-f606-452e-a71d-9f7e9212518d","Type":"ContainerStarted","Data":"f5419ef21b203dfbc6d87f8b74cec55c19b83b728dc9607fad5739d27e9aeb76"} Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.805352 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lq2vd" event={"ID":"5ab22459-f606-452e-a71d-9f7e9212518d","Type":"ContainerStarted","Data":"b1bab4d3c7e7f0527477090635b68851a32e6080c9b85da018cb45bdd53af007"} Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.820848 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mkpxk" event={"ID":"e7553566-6a69-4542-892b-bd74d3c8ac0e","Type":"ContainerStarted","Data":"def06bdf57a98a92bf51e7378573156f14a7522ebd2921b8726d658aba4c6b67"} Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.825941 4766 generic.go:334] "Generic (PLEG): container finished" podID="53d8ed7c-3414-4b1f-98c0-5b577dbc5b31" containerID="d556082a2bd195fe37f5a6e5710c87ca37431cd75246dbc39953f7349fe5f923" exitCode=0 Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.826068 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkwt7" event={"ID":"53d8ed7c-3414-4b1f-98c0-5b577dbc5b31","Type":"ContainerDied","Data":"d556082a2bd195fe37f5a6e5710c87ca37431cd75246dbc39953f7349fe5f923"} Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.826115 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkwt7" event={"ID":"53d8ed7c-3414-4b1f-98c0-5b577dbc5b31","Type":"ContainerStarted","Data":"5116e788ae859eeae5a1e4d95e1a8a7b21b6dc06c38cd7a66c0b3c41f49b0bad"} Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.831500 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xwtsb" event={"ID":"6ee06c09-afd6-4909-a722-2812c4c391b7","Type":"ContainerStarted","Data":"e058961ba001b64d23cb0fde4da13e5c5dae80d7c6d3af649e0864347b47b601"} Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.831556 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xwtsb" event={"ID":"6ee06c09-afd6-4909-a722-2812c4c391b7","Type":"ContainerStarted","Data":"2c9fcb6f7584372353b745b386407d6873eb1baa06c3ac68522f7128afc28ec0"} Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.832909 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-xwtsb" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.835546 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h54ww" event={"ID":"faf12f57-ca0e-47d4-bb9c-06b758d0ebbc","Type":"ContainerStarted","Data":"3f98c60fa6c2327544e2f169cafa70a94772d02142891a2b4375ea7d78ac17af"} Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.840141 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxg9v" event={"ID":"1b89e415-2430-4431-a579-fe555ba8771f","Type":"ContainerStarted","Data":"8e87e0490fff30f42949fd7b85eb45b34112fdcefc0c9eb86901bfae71425cf4"} Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.855575 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-q65jj" event={"ID":"22f4cece-ea69-4c25-b492-8d03d960353e","Type":"ContainerStarted","Data":"ceb0c5832d79c9d5d88b938d6c2de4ba0a094f73cb8b5266d29b7d62dfdb2aad"} Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.856310 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:23:52 crc kubenswrapper[4766]: E0129 11:23:52.857789 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:53.357766493 +0000 UTC m=+170.470159514 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.867037 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6mbb9" event={"ID":"eeaa3b58-307f-43e3-b8be-f5a93ae40bdc","Type":"ContainerStarted","Data":"2e1bf9192841f4fe99b713a9b08a8da210102b5b00fb1a73d143712c5d997679"} Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.867114 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6mbb9" event={"ID":"eeaa3b58-307f-43e3-b8be-f5a93ae40bdc","Type":"ContainerStarted","Data":"56e450557efb7e6a96ff4053779acebb0dcc80db90f8af84042da7c03f8f0d0c"} Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.868755 4766 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-zj2l7 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.868798 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" podUID="2cf63d06-b674-4a7b-b896-5c78bc9d412d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.943157 4766 patch_prober.go:28] interesting pod/console-operator-58897d9998-xwtsb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.943252 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xwtsb" podUID="6ee06c09-afd6-4909-a722-2812c4c391b7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 29 11:23:52 crc kubenswrapper[4766]: I0129 11:23:52.970797 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:52 crc kubenswrapper[4766]: E0129 11:23:52.980300 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:53.480281486 +0000 UTC m=+170.592674497 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.037487 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2st4g"] Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.073807 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:23:53 crc kubenswrapper[4766]: E0129 11:23:53.075109 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:53.575085069 +0000 UTC m=+170.687478090 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.176455 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:53 crc kubenswrapper[4766]: E0129 11:23:53.176856 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:53.6768366 +0000 UTC m=+170.789229611 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.277807 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:23:53 crc kubenswrapper[4766]: E0129 11:23:53.278486 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:53.778444918 +0000 UTC m=+170.890837929 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.278669 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:53 crc kubenswrapper[4766]: E0129 11:23:53.279379 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:53.779354914 +0000 UTC m=+170.891747925 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.298205 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kgqmk"] Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.298276 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9wgx9"] Jan 29 11:23:53 crc kubenswrapper[4766]: W0129 11:23:53.362269 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7a96ad2_50e7_4cc9_8070_185cb9d97774.slice/crio-7250cfd360824afe158b34545e638ffd4ddf32030ccab5b1e44ab38fd67e7863 WatchSource:0}: Error finding container 7250cfd360824afe158b34545e638ffd4ddf32030ccab5b1e44ab38fd67e7863: Status 404 returned error can't find the container with id 7250cfd360824afe158b34545e638ffd4ddf32030ccab5b1e44ab38fd67e7863 Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.396169 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:23:53 crc kubenswrapper[4766]: E0129 11:23:53.396600 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:53.896520484 +0000 UTC m=+171.008913495 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.397057 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:53 crc kubenswrapper[4766]: E0129 11:23:53.397540 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:53.897519273 +0000 UTC m=+171.009912284 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.505317 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:23:53 crc kubenswrapper[4766]: E0129 11:23:53.506006 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:54.005984336 +0000 UTC m=+171.118377347 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.554774 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-n8m6h"] Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.573101 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-vnx7s" podStartSLOduration=135.573065168 podStartE2EDuration="2m15.573065168s" podCreationTimestamp="2026-01-29 11:21:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:53.546998725 +0000 UTC m=+170.659391746" watchObservedRunningTime="2026-01-29 11:23:53.573065168 +0000 UTC m=+170.685458199" Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.593226 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-d8l67"] Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.613465 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:53 crc kubenswrapper[4766]: E0129 11:23:53.614198 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:54.11416916 +0000 UTC m=+171.226562171 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.700613 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zxdzm"] Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.714903 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:23:53 crc kubenswrapper[4766]: E0129 11:23:53.715284 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:54.215264773 +0000 UTC m=+171.327657784 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.754809 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kt5b7"] Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.770485 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jjfht"] Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.787918 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kgjkl"] Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.806766 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494755-ff4r9"] Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.809125 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" podStartSLOduration=134.809099858 podStartE2EDuration="2m14.809099858s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:53.795608864 +0000 UTC m=+170.908001905" watchObservedRunningTime="2026-01-29 11:23:53.809099858 +0000 UTC m=+170.921492869" Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.816982 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:53 crc kubenswrapper[4766]: E0129 11:23:53.817516 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:54.317463847 +0000 UTC m=+171.429856858 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.846164 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr4mj"] Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.884170 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x28zg"] Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.923942 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.924160 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.924224 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.924259 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:23:53 crc kubenswrapper[4766]: E0129 11:23:53.924305 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:54.424254792 +0000 UTC m=+171.536647873 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.924382 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.924867 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:23:53 crc kubenswrapper[4766]: E0129 11:23:53.926080 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:54.426068464 +0000 UTC m=+171.538461475 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.929663 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.948803 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kgqmk" event={"ID":"b6b1d6a6-3e31-4fcf-88e2-d73f910a77ef","Type":"ContainerStarted","Data":"cad060f4fc84ebf6d0a4c22d668b49f50df233e5531e9a5b71762a544029042c"} Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.964142 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-ncttr" podStartSLOduration=134.964090158 podStartE2EDuration="2m14.964090158s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:53.948703259 +0000 UTC m=+171.061096280" watchObservedRunningTime="2026-01-29 11:23:53.964090158 +0000 UTC m=+171.076483179" Jan 29 11:23:53 crc 
kubenswrapper[4766]: I0129 11:23:53.975904 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.976392 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.980457 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" event={"ID":"a7315b30-c300-4afe-b798-de15fe9e9cc8","Type":"ContainerStarted","Data":"d4389e9066e461ff4d6e5b7fe6ed7ccb0123f9201135f5982aa271f7f3044694"} Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.983060 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.983168 4766 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-r9vtz container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.983212 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" podUID="a7315b30-c300-4afe-b798-de15fe9e9cc8" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.988735 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-x9vrs" event={"ID":"1e7de4ba-321d-4b46-b66b-cc1f437bd804","Type":"ContainerStarted","Data":"4d731dc3901b3ee75d4a4e8b9abeed8a32307ccb816776607632ab64b7a34256"} Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.990670 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gj52d" event={"ID":"7d3ca5b4-1aba-4925-a04a-8d0d3ee29328","Type":"ContainerStarted","Data":"ec3d2910744efa1a0aa9dd5aae938bf30eb9ee3d8bafd22674a324fc06ad3dea"} Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.993521 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" event={"ID":"72cf9723-cba4-4f3b-90c4-c8b919e9b7a8","Type":"ContainerStarted","Data":"c862f98590c008134e2625f528cc31e05a05fa60a5b1d0e409b8ea4638f7a33d"} Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.993559 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" event={"ID":"72cf9723-cba4-4f3b-90c4-c8b919e9b7a8","Type":"ContainerStarted","Data":"41d6d9c1c1c95cdb4096f171331d6a959b622d98890a28f387570ac099e40b89"} Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.994082 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:23:53 crc kubenswrapper[4766]: I0129 11:23:53.994225 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.002982 4766 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ztc7c container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.003083 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" podUID="72cf9723-cba4-4f3b-90c4-c8b919e9b7a8" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.008138 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mkpxk" event={"ID":"e7553566-6a69-4542-892b-bd74d3c8ac0e","Type":"ContainerStarted","Data":"643cf31f13e381ab20093534ce28efff9b0565d7c527f78ab1d46090a75ba56f"} Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.026307 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:23:54 crc kubenswrapper[4766]: E0129 11:23:54.028995 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:54.528962647 +0000 UTC m=+171.641355658 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.036857 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" event={"ID":"f0a6fc20-9a8f-4e97-8689-890f8a931a86","Type":"ContainerStarted","Data":"6faf0a8779a626f1a75b3514f11f5eb0d8e67dc095603ac4bcfb2734c7edeb75"} Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.038775 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.042820 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-xwtsb" podStartSLOduration=135.042785922 podStartE2EDuration="2m15.042785922s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:54.008795132 +0000 UTC m=+171.121188153" watchObservedRunningTime="2026-01-29 11:23:54.042785922 +0000 UTC m=+171.155178933" Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.044078 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9wk84"] Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.045431 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.047508 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2st4g" event={"ID":"0a78c46d-38f6-4cae-9fa7-36adb60b921e","Type":"ContainerStarted","Data":"2cf73b38a74eb2b2799fed229bf7d467b0b977c0f16ca996762974adb1c76c8a"} Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.047554 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2st4g" event={"ID":"0a78c46d-38f6-4cae-9fa7-36adb60b921e","Type":"ContainerStarted","Data":"4626cd58e91579949fea31a5a7fb0963115d6f6e3f5faeb55d6422ee332e37c5"} Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.052762 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-crmjf" event={"ID":"06681e16-3449-44aa-9680-1f1566bca8f3","Type":"ContainerStarted","Data":"7c8e1e38ad0b2200a2df2c27df71b8a50b31f5d960856e33cc7c5a2b8a843c28"} Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.056394 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lq2vd" podStartSLOduration=136.056370029 podStartE2EDuration="2m16.056370029s" podCreationTimestamp="2026-01-29 11:21:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:54.054204727 +0000 UTC m=+171.166597738" watchObservedRunningTime="2026-01-29 11:23:54.056370029 +0000 UTC m=+171.168763040" Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.059027 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mfbhv"] Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.074517 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-bqx75" podStartSLOduration=135.074468105 podStartE2EDuration="2m15.074468105s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:54.071797429 +0000 UTC m=+171.184190440" watchObservedRunningTime="2026-01-29 11:23:54.074468105 +0000 UTC m=+171.186861116" Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.079916 
4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n8m6h" event={"ID":"7d34fd54-1d88-420a-a4cc-405e9d3900ab","Type":"ContainerStarted","Data":"b7c7f723eabd9694848b0305c9b3fe07b11362ab815f05798e84c142c06eccd3"} Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.095192 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfddr" podStartSLOduration=136.095172495 podStartE2EDuration="2m16.095172495s" podCreationTimestamp="2026-01-29 11:21:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:54.094541477 +0000 UTC m=+171.206934498" watchObservedRunningTime="2026-01-29 11:23:54.095172495 +0000 UTC m=+171.207565506" Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.106523 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ffkp6" event={"ID":"eaa7f58f-baeb-4ce9-8752-b1deb9ec5103","Type":"ContainerStarted","Data":"762de9444ff3486c53d8e705f8b31ed9bb05c2a0735df1584c43dd1d8889a201"} Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.109047 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ffkp6" Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.127586 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" podStartSLOduration=135.127565749 podStartE2EDuration="2m15.127565749s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:54.12515044 +0000 UTC m=+171.237543451" watchObservedRunningTime="2026-01-29 11:23:54.127565749 +0000 UTC m=+171.239958760" Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.127756 4766 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-ffkp6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.130164 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ffkp6" podUID="eaa7f58f-baeb-4ce9-8752-b1deb9ec5103" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.131758 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkwt7" Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.132739 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:54 crc kubenswrapper[4766]: E0129 11:23:54.135212 4766 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:54.635192166 +0000 UTC m=+171.747585257 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.147584 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.148518 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-cnns4" event={"ID":"2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae","Type":"ContainerStarted","Data":"0c1b6b30e63a4961c1e922284e47e62d70b9365fa69871b270055fda12d3faee"} Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.164780 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9wgx9" event={"ID":"f7a96ad2-50e7-4cc9-8070-185cb9d97774","Type":"ContainerStarted","Data":"7250cfd360824afe158b34545e638ffd4ddf32030ccab5b1e44ab38fd67e7863"} Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.173473 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h54ww" event={"ID":"faf12f57-ca0e-47d4-bb9c-06b758d0ebbc","Type":"ContainerStarted","Data":"c7a0c1971e8d7973a3956167ae5ad09ec2aa4bbefe3cb3a48b789bc9ac9890f5"} Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.183456 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6mbb9" podStartSLOduration=135.183434122 podStartE2EDuration="2m15.183434122s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:54.182968189 +0000 UTC m=+171.295361200" watchObservedRunningTime="2026-01-29 11:23:54.183434122 +0000 UTC m=+171.295827143" Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.191491 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mjdnh" event={"ID":"df181b4a-3b70-456c-9fd8-c1d03bee42f5","Type":"ContainerStarted","Data":"391bb9b48c1d2ccd94de4f915d6d4e026c4dfc8c66203f0219ce3f6a047ac64f"} Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.199042 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kt5b7" event={"ID":"089ac99c-bbe9-48d2-81fe-a021c3c218a4","Type":"ContainerStarted","Data":"8488f27518bebda4bd99355725e0d8d57d37e33bd8571224ced323178a82ed37"} Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.201077 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-bkp45" 
event={"ID":"19c8d347-e369-4279-9564-c87b1a996261","Type":"ContainerStarted","Data":"c5b4059f2cb8ee76a80f740605622807d1cd311cd4b9c32ea3b1da42f5619567"} Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.202317 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-zxdzm" event={"ID":"fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5","Type":"ContainerStarted","Data":"be5317b71862fa21aa9c526affdc1b884ef661bb62fa8a6d423be6e93d80743b"} Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.203156 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-d8l67" event={"ID":"89eca04b-5abc-42ee-8878-094433bfe94b","Type":"ContainerStarted","Data":"23e80a40d960f161acd3060df1c99fc7c64da4889c81b5dd7fdb2cc82722d68d"} Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.204658 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxg9v" event={"ID":"1b89e415-2430-4431-a579-fe555ba8771f","Type":"ContainerStarted","Data":"8956e0b4ea6a408078db7613f19131ae165146265ebfd4df9547b4999fb7c2db"} Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.210347 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b6f2t" event={"ID":"1ded8a8f-f67c-422c-9818-d2ac883d4026","Type":"ContainerStarted","Data":"a9015718a8ae43b7fb18faa8a1ef360447d6937b13c36cb7963e0602ff8ab3bb"} Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.210400 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b6f2t" event={"ID":"1ded8a8f-f67c-422c-9818-d2ac883d4026","Type":"ContainerStarted","Data":"4c87490ecb51206dc247af6a890a394f2637b962ee82d0421568c8ee0d52f3be"} Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.211577 4766 patch_prober.go:28] interesting pod/console-operator-58897d9998-xwtsb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.211660 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xwtsb" podUID="6ee06c09-afd6-4909-a722-2812c4c391b7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.218105 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-q65jj" podStartSLOduration=135.21808182 podStartE2EDuration="2m15.21808182s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:54.215306171 +0000 UTC m=+171.327699202" watchObservedRunningTime="2026-01-29 11:23:54.21808182 +0000 UTC m=+171.330474831" Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.236996 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.238056 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" Jan 29 11:23:54 crc kubenswrapper[4766]: E0129 11:23:54.240118 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:54.740088467 +0000 UTC m=+171.852481478 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.301587 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-x9vrs" podStartSLOduration=135.301554919 podStartE2EDuration="2m15.301554919s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:54.274592961 +0000 UTC m=+171.386985982" watchObservedRunningTime="2026-01-29 11:23:54.301554919 +0000 UTC m=+171.413947930" Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.327760 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" podStartSLOduration=135.327732935 podStartE2EDuration="2m15.327732935s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:54.32157879 +0000 UTC m=+171.433971821" watchObservedRunningTime="2026-01-29 11:23:54.327732935 +0000 UTC m=+171.440125956" Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.346379 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:54 crc kubenswrapper[4766]: E0129 11:23:54.348373 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:54.848355244 +0000 UTC m=+171.960748255 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.421090 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gj52d" podStartSLOduration=135.421065087 podStartE2EDuration="2m15.421065087s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:54.41839074 +0000 UTC m=+171.530783751" watchObservedRunningTime="2026-01-29 11:23:54.421065087 +0000 UTC m=+171.533458098" Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.421994 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" podStartSLOduration=136.421989313 podStartE2EDuration="2m16.421989313s" podCreationTimestamp="2026-01-29 11:21:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:54.365330398 +0000 UTC m=+171.477723429" watchObservedRunningTime="2026-01-29 11:23:54.421989313 +0000 UTC m=+171.534382324" Jan 29 11:23:54 crc kubenswrapper[4766]: E0129 11:23:54.447830 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:54.947806069 +0000 UTC m=+172.060199080 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.447707 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.458454 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.453504 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" podStartSLOduration=135.453480541 podStartE2EDuration="2m15.453480541s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:54.453305626 +0000 UTC m=+171.565698637" watchObservedRunningTime="2026-01-29 11:23:54.453480541 +0000 UTC m=+171.565873552" Jan 29 11:23:54 crc kubenswrapper[4766]: E0129 11:23:54.484778 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:54.984755313 +0000 UTC m=+172.097148314 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.564786 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:23:54 crc kubenswrapper[4766]: E0129 11:23:54.565213 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:55.065194496 +0000 UTC m=+172.177587507 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.588087 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mkpxk" podStartSLOduration=136.588061088 podStartE2EDuration="2m16.588061088s" podCreationTimestamp="2026-01-29 11:21:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:54.572142974 +0000 UTC m=+171.684535985" watchObservedRunningTime="2026-01-29 11:23:54.588061088 +0000 UTC m=+171.700454099" Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.590285 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mjdnh" podStartSLOduration=135.590275872 podStartE2EDuration="2m15.590275872s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:54.500723408 +0000 UTC m=+171.613116419" watchObservedRunningTime="2026-01-29 11:23:54.590275872 +0000 UTC m=+171.702668883" Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.628211 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b6f2t" podStartSLOduration=135.628184802 podStartE2EDuration="2m15.628184802s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:54.628046008 +0000 UTC m=+171.740439019" watchObservedRunningTime="2026-01-29 11:23:54.628184802 +0000 UTC m=+171.740577813" Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.665072 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxg9v" podStartSLOduration=135.665047413 podStartE2EDuration="2m15.665047413s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:54.661888693 +0000 UTC m=+171.774281704" watchObservedRunningTime="2026-01-29 11:23:54.665047413 +0000 UTC m=+171.777440434" Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.669674 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:54 crc kubenswrapper[4766]: E0129 11:23:54.670132 4766 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:55.170112698 +0000 UTC m=+172.282505709 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.753592 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-h54ww" podStartSLOduration=135.753566347 podStartE2EDuration="2m15.753566347s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:54.713975909 +0000 UTC m=+171.826368920" watchObservedRunningTime="2026-01-29 11:23:54.753566347 +0000 UTC m=+171.865959358" Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.772933 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:23:54 crc kubenswrapper[4766]: E0129 11:23:54.773299 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:55.273277409 +0000 UTC m=+172.385670410 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.784614 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkwt7" podStartSLOduration=135.784588882 podStartE2EDuration="2m15.784588882s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:54.755129612 +0000 UTC m=+171.867522633" watchObservedRunningTime="2026-01-29 11:23:54.784588882 +0000 UTC m=+171.896981893"
Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.804733 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g"
Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.805463 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g"
Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.822285 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-bkp45" podStartSLOduration=5.822262556 podStartE2EDuration="5.822262556s" podCreationTimestamp="2026-01-29 11:23:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:54.786733733 +0000 UTC m=+171.899126744" watchObservedRunningTime="2026-01-29 11:23:54.822262556 +0000 UTC m=+171.934655567"
Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.831153 4766 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-phb5g container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.831230 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" podUID="f0a6fc20-9a8f-4e97-8689-890f8a931a86" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.839100 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ffkp6" podStartSLOduration=135.839075276 podStartE2EDuration="2m15.839075276s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:54.81994388 +0000 UTC m=+171.932336901" watchObservedRunningTime="2026-01-29 11:23:54.839075276 +0000 UTC m=+171.951468277"
Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.878605 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql"
Jan 29 11:23:54 crc kubenswrapper[4766]: E0129 11:23:54.879036 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:55.379021885 +0000 UTC m=+172.491414896 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:54 crc kubenswrapper[4766]: I0129 11:23:54.983263 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:23:54 crc kubenswrapper[4766]: E0129 11:23:54.983694 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:55.483672339 +0000 UTC m=+172.596065350 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.047146 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-h54ww"
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.050057 4766 patch_prober.go:28] interesting pod/router-default-5444994796-h54ww container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.050131 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h54ww" podUID="faf12f57-ca0e-47d4-bb9c-06b758d0ebbc" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.085194 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql"
Jan 29 11:23:55 crc kubenswrapper[4766]: E0129 11:23:55.085556 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:55.585540333 +0000 UTC m=+172.697933344 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.200177 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:23:55 crc kubenswrapper[4766]: E0129 11:23:55.200724 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:55.700701597 +0000 UTC m=+172.813094618 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.313186 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql"
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.328850 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n8m6h" event={"ID":"7d34fd54-1d88-420a-a4cc-405e9d3900ab","Type":"ContainerStarted","Data":"8527186db9bc955945b13124acc22baca427b0709b178ab49c9f0e66b74e5f17"}
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.331488 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ffkp6" event={"ID":"eaa7f58f-baeb-4ce9-8752-b1deb9ec5103","Type":"ContainerStarted","Data":"46b2774cf36ced3d966f6a99fb83a449b209789e9daca381978c3c88e76c2e7a"}
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.336179 4766 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-ffkp6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body=
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.336280 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ffkp6" podUID="eaa7f58f-baeb-4ce9-8752-b1deb9ec5103" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused"
Jan 29 11:23:55 crc kubenswrapper[4766]: E0129 11:23:55.336635 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:55.836605062 +0000 UTC m=+172.948998073 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.339008 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-bkp45" event={"ID":"19c8d347-e369-4279-9564-c87b1a996261","Type":"ContainerStarted","Data":"d1544cdc6aaa32677fb32af2660b411f5e79c79be3da7f024de37fe4359f63f7"}
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.390151 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x28zg" event={"ID":"c9f47551-239d-46a0-857b-34eec9853f06","Type":"ContainerStarted","Data":"cbc5e470d8abd76b4bd3ff24564ef13f2fda13c15bfe95ea5267a26aea577b88"}
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.391670 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x28zg" event={"ID":"c9f47551-239d-46a0-857b-34eec9853f06","Type":"ContainerStarted","Data":"852d681d6f65c1eb8323803041ade6bde5ab9c68af8e0d2a3b10db721c5394ba"}
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.391815 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x28zg"
Jan 29 11:23:55 crc kubenswrapper[4766]: W0129 11:23:55.406692 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-60b9401c9c75b7fde442091e08ec9b1fc5a428d5ac8388957d0b434c002ddb9a WatchSource:0}: Error finding container 60b9401c9c75b7fde442091e08ec9b1fc5a428d5ac8388957d0b434c002ddb9a: Status 404 returned error can't find the container with id 60b9401c9c75b7fde442091e08ec9b1fc5a428d5ac8388957d0b434c002ddb9a
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.407092 4766 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-x28zg container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.407143 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x28zg" podUID="c9f47551-239d-46a0-857b-34eec9853f06" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused"
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.443954 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:23:55 crc kubenswrapper[4766]: E0129 11:23:55.445071 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:55.945038804 +0000 UTC m=+173.057431875 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.446621 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" event={"ID":"a99b07fd-7413-4523-8812-f0c7fe540f6d","Type":"ContainerStarted","Data":"c51158a1d81f10466dfc06b65c76a21b15eccd5cc69c26b529ec324d4bac7827"}
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.513227 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"045a09504d43d1a4f7ee37a3d27e4029864c8222a365324f51978d197ce8ac9f"}
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.522288 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"333949e5b4dc7538b7eee6f23e4829cd2b834a7ffff36f9e4dce234bf347c209"}
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.541544 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jjfht" event={"ID":"5b311e5d-45ff-425d-b8af-d4cd47ccd4ea","Type":"ContainerStarted","Data":"80efe771b3f4a9feef33a7f970300df871b4fa0ed911288dc80cb8cd65ae5cd3"}
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.545363 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql"
Jan 29 11:23:55 crc kubenswrapper[4766]: E0129 11:23:55.547130 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:56.047113244 +0000 UTC m=+173.159506255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.571893 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkwt7" event={"ID":"53d8ed7c-3414-4b1f-98c0-5b577dbc5b31","Type":"ContainerStarted","Data":"524f23cf4ff23fba08ad0e3be1f4984997690e6e119e5e12108e00f804d96afc"}
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.584751 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-7zssb" event={"ID":"9628ffe6-8bd9-40c2-82d9-d844078b7086","Type":"ContainerStarted","Data":"1fa5bdf7b9254a821e6d7c707378826e75ca1c2d7367f74eb5cdbc290d6fac48"}
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.599894 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b6f2t" event={"ID":"1ded8a8f-f67c-422c-9818-d2ac883d4026","Type":"ContainerStarted","Data":"d0afcddc62d4a9b0e43694273d7b8cadac8737bd8c635f3a8ad13dbded170635"}
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.623363 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9wgx9" event={"ID":"f7a96ad2-50e7-4cc9-8070-185cb9d97774","Type":"ContainerStarted","Data":"3903bc0db5ed87aec934340f22c2f6d1c91f02ae79d6deebe426cf48140ca169"}
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.648381 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
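The block above captures the failure loop in miniature: the reconciler starts MountVolume/UnmountVolume for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8, the operation fails because kubevirt.io.hostpath-provisioner is not in the kubelet's list of registered CSI drivers, and nestedpendingoperations schedules the next attempt 500ms out. A minimal sketch for pulling that retry schedule out of a saved copy of this journal; the kubelet.log filename and the one-entry-per-line capture are assumptions for illustration, not part of the log:

#!/usr/bin/env python3
# retry_schedule.py - list kubelet volume-operation retries and their backoff
# deadlines from a journal capture like the one above (hypothetical kubelet.log).
import re

# Matches the nestedpendingoperations.go:348 lines shown above.
PAT = re.compile(
    r'No retries permitted until (?P<until>\S+ \S+) \+0000 UTC m=\+(?P<mono>[\d.]+) '
    r'\(durationBeforeRetry (?P<backoff>\w+)\)\. '
    r'Error: (?P<op>\w+\.\w+) failed for volume "(?P<vol>[^"]+)"'
)

with open("kubelet.log") as fh:
    for line in fh:
        m = PAT.search(line)
        if m:
            print(f'{m["until"]}  retry_in={m["backoff"]:>6}  {m["op"]}  {m["vol"]}')

Run against this capture it prints one line per failed attempt, which makes the retry cadence for each of the two operations easy to scan.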
Jan 29 11:23:55 crc kubenswrapper[4766]: E0129 11:23:55.649825 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:56.149802002 +0000 UTC m=+173.262195023 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.658831 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gj52d" event={"ID":"7d3ca5b4-1aba-4925-a04a-8d0d3ee29328","Type":"ContainerStarted","Data":"9688613f9bf31f27060986d9472b4b7e85c026de006baf865267cdfe61fa4eb3"}
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.669016 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr4mj" event={"ID":"ba79e1ab-c194-4c87-bd4f-45a4845b4d32","Type":"ContainerStarted","Data":"68aa8ad9e926f29036b51d9fe2a63082bdf9a9decc8a5e624bcfda1e4009072f"}
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.690648 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kt5b7" event={"ID":"089ac99c-bbe9-48d2-81fe-a021c3c218a4","Type":"ContainerStarted","Data":"112817e2b10beddfc93b769f954e3c0027c935b50b12871f3be17a34be277a90"}
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.738326 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-cnns4" event={"ID":"2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae","Type":"ContainerStarted","Data":"89382fe72f7375b735c7c2ae0fe1cdf8c7903d4d6ffdaea24375b6232601966a"}
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.751826 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql"
Jan 29 11:23:55 crc kubenswrapper[4766]: E0129 11:23:55.764875 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:56.264848603 +0000 UTC m=+173.377241624 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.775986 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kgqmk" event={"ID":"b6b1d6a6-3e31-4fcf-88e2-d73f910a77ef","Type":"ContainerStarted","Data":"d5a6e235c0081e9391b4406f840cd2f89ac39b9af124fff0b03215c083b14ab4"}
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.812629 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9wk84" event={"ID":"699d99f6-a65f-4822-80af-7f254046575f","Type":"ContainerStarted","Data":"0af3b8d57f9a2ee00fdfcb72e23a27a87f5a2adc2e42b1f716d7108d9d66f5d0"}
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.812695 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9wk84" event={"ID":"699d99f6-a65f-4822-80af-7f254046575f","Type":"ContainerStarted","Data":"e93cd30af139eba4dc4445c9137a0ab26ca45e9fbe1a7520fcda70f038654b5b"}
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.813270 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9wk84"
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.815834 4766 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-9wk84 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body=
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.815954 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9wk84" podUID="699d99f6-a65f-4822-80af-7f254046575f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused"
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.828995 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfbhv" event={"ID":"77154bcb-c7aa-4ee4-b8a4-3e599a303191","Type":"ContainerStarted","Data":"e8b2deb79166942ba47ccd85c0b1620afe072396a645c088af2392fd7315581b"}
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.850438 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-ff4r9" event={"ID":"3cfb993e-e305-4ad1-81f6-349bc2544e60","Type":"ContainerStarted","Data":"c94fa72e9e11ff303d0e43ca27cb9b3db4a372d5279771b7dce50783145d6354"}
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.850508 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-ff4r9" event={"ID":"3cfb993e-e305-4ad1-81f6-349bc2544e60","Type":"ContainerStarted","Data":"867e8a293f567b84226c1f7e52f48de7e291c81a4c144d872f2b53a9fdcf3dac"}
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.852558 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:23:55 crc kubenswrapper[4766]: E0129 11:23:55.853633 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:56.353597563 +0000 UTC m=+173.465990574 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.880033 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kgjkl" event={"ID":"14ea2166-db98-4431-a9e8-23226dc0ee79","Type":"ContainerStarted","Data":"59da566f65458207c130e394ebf9b8ed7daf17b914b8d196b7d91b39a6b70997"}
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.880081 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kgjkl" event={"ID":"14ea2166-db98-4431-a9e8-23226dc0ee79","Type":"ContainerStarted","Data":"bee83cbe0e1775b17f73223352342f4dad04642ed65a6af11589b51fbcd89faf"}
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.883514 4766 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ztc7c container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body=
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.883558 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" podUID="72cf9723-cba4-4f3b-90c4-c8b919e9b7a8" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused"
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.923223 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9wgx9" podStartSLOduration=136.923200858 podStartE2EDuration="2m16.923200858s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:55.923170287 +0000 UTC m=+173.035563298" watchObservedRunningTime="2026-01-29 11:23:55.923200858 +0000 UTC m=+173.035593869"
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.957088 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql"
Jan 29 11:23:55 crc kubenswrapper[4766]: I0129 11:23:55.965474 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9wk84" podStartSLOduration=136.965450002 podStartE2EDuration="2m16.965450002s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:55.963482326 +0000 UTC m=+173.075875337" watchObservedRunningTime="2026-01-29 11:23:55.965450002 +0000 UTC m=+173.077843003"
Jan 29 11:23:55 crc kubenswrapper[4766]: E0129 11:23:55.967532 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:56.467502731 +0000 UTC m=+173.579895892 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.010302 4766 csr.go:261] certificate signing request csr-2wtvl is approved, waiting to be issued
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.024854 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kgqmk" podStartSLOduration=137.024815865 podStartE2EDuration="2m17.024815865s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:56.01062349 +0000 UTC m=+173.123016501" watchObservedRunningTime="2026-01-29 11:23:56.024815865 +0000 UTC m=+173.137208876"
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.031632 4766 csr.go:257] certificate signing request csr-2wtvl is issued
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.061551 4766 patch_prober.go:28] interesting pod/router-default-5444994796-h54ww container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 11:23:56 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld
Jan 29 11:23:56 crc kubenswrapper[4766]: [+]process-running ok
Jan 29 11:23:56 crc kubenswrapper[4766]: healthz check failed
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.061635 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h54ww" podUID="faf12f57-ca0e-47d4-bb9c-06b758d0ebbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.071225 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:23:56 crc kubenswrapper[4766]: E0129 11:23:56.071572 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:56.571550738 +0000 UTC m=+173.683943749 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.125087 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x28zg" podStartSLOduration=137.124738954 podStartE2EDuration="2m17.124738954s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:56.105014772 +0000 UTC m=+173.217407783" watchObservedRunningTime="2026-01-29 11:23:56.124738954 +0000 UTC m=+173.237131965"
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.148986 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-ff4r9" podStartSLOduration=137.148951105 podStartE2EDuration="2m17.148951105s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:56.139597078 +0000 UTC m=+173.251990089" watchObservedRunningTime="2026-01-29 11:23:56.148951105 +0000 UTC m=+173.261344126"
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.176305 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql"
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.177118 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr4mj" podStartSLOduration=137.176094659 podStartE2EDuration="2m17.176094659s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:56.174621176 +0000 UTC m=+173.287014207" watchObservedRunningTime="2026-01-29 11:23:56.176094659 +0000 UTC m=+173.288487660"
Jan 29 11:23:56 crc kubenswrapper[4766]: E0129 11:23:56.178133 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:56.677175519 +0000 UTC m=+173.789568530 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.235172 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-kgjkl" podStartSLOduration=7.235149292 podStartE2EDuration="7.235149292s" podCreationTimestamp="2026-01-29 11:23:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:56.233267639 +0000 UTC m=+173.345660650" watchObservedRunningTime="2026-01-29 11:23:56.235149292 +0000 UTC m=+173.347542303"
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.277838 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:23:56 crc kubenswrapper[4766]: E0129 11:23:56.278247 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:56.778227871 +0000 UTC m=+173.890620892 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.380066 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql"
Jan 29 11:23:56 crc kubenswrapper[4766]: E0129 11:23:56.380626 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:56.88060647 +0000 UTC m=+173.992999481 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.481353 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:23:56 crc kubenswrapper[4766]: E0129 11:23:56.481765 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:56.981698842 +0000 UTC m=+174.094091853 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.482142 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql"
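By this point the same two operations have been failing roughly twice a second for over a minute. A small tally, under the same kubelet.log capture assumption as the sketch above, shows how much of the journal this one unregistered driver accounts for:

#!/usr/bin/env python3
# csi_churn.py - count the repeating CSI failures per operation and pod.
from collections import Counter
import re

PAT = re.compile(r'Error: (?P<op>\w+\.\w+) failed for volume "(?P<vol>[^"]+)" '
                 r'\(UniqueName: "[^"]+"\) pod "(?P<pod>[^"]+)"')

tally = Counter()
with open("kubelet.log") as fh:            # hypothetical capture of this journal
    for line in fh:
        for m in PAT.finditer(line):
            tally[(m["op"], m["pod"])] += 1

for (op, pod), n in tally.most_common():
    print(f"{n:4d}  {op:28s}  pod {pod}")

Both counters climb in lockstep, presumably until the hostpath-provisioner plugin pod registers the driver with the kubelet.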
Jan 29 11:23:56 crc kubenswrapper[4766]: E0129 11:23:56.482566 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:56.982546236 +0000 UTC m=+174.094939247 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.585211 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:23:56 crc kubenswrapper[4766]: E0129 11:23:56.586087 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:57.086064798 +0000 UTC m=+174.198457809 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.675965 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz"
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.688667 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql"
Jan 29 11:23:56 crc kubenswrapper[4766]: E0129 11:23:56.689668 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:57.189649062 +0000 UTC m=+174.302042073 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.790006 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:23:56 crc kubenswrapper[4766]: E0129 11:23:56.790220 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:57.290187148 +0000 UTC m=+174.402580159 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.790346 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql"
Jan 29 11:23:56 crc kubenswrapper[4766]: E0129 11:23:56.790766 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:57.290751954 +0000 UTC m=+174.403144965 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.887077 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr4mj" event={"ID":"ba79e1ab-c194-4c87-bd4f-45a4845b4d32","Type":"ContainerStarted","Data":"a0c54922a28af6e9425f7a723d885d2bffd56190d65f7d8584cacf6bf8496d3f"}
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.888682 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jjfht" event={"ID":"5b311e5d-45ff-425d-b8af-d4cd47ccd4ea","Type":"ContainerStarted","Data":"507f431e13ece48f072aef02a8a5e27a67da22f2426ac23d77621fa40a5f4911"}
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.893934 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:23:56 crc kubenswrapper[4766]: E0129 11:23:56.894328 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:57.394283846 +0000 UTC m=+174.506676857 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.896168 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-cnns4" event={"ID":"2b2e1dd6-6871-4e8c-afee-d1a17f44f4ae","Type":"ContainerStarted","Data":"45b2d8ea7297f4379c2e457c967b81ec9dc9d4c0ee8a894c66d0c783888bb92c"}
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.896303 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-cnns4"
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.897892 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-crmjf" event={"ID":"06681e16-3449-44aa-9680-1f1566bca8f3","Type":"ContainerStarted","Data":"95e35db4f9620411107f587976c3abf55068a879a64422ddc53452882f34e50b"}
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.902156 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2st4g" event={"ID":"0a78c46d-38f6-4cae-9fa7-36adb60b921e","Type":"ContainerStarted","Data":"e48328e59d7e24e80ddf425d03cffe5dc4090a18db4561a8b6104ced1e405eab"}
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.904267 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfbhv" event={"ID":"77154bcb-c7aa-4ee4-b8a4-3e599a303191","Type":"ContainerStarted","Data":"1f09792ca8f24e25a1edc07a59fbeea01e04a7863d34390b0b562e565b12eb03"}
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.904497 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfbhv" event={"ID":"77154bcb-c7aa-4ee4-b8a4-3e599a303191","Type":"ContainerStarted","Data":"a5a8b4792ea35596f0018d8d05926b191dc4f1ffc54ca0bdd4fff9a052247d53"}
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.906331 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8bf9ad6bd974ae66e248a2bb1ce5292a45e7a55efb5d4170da6c9c3337ae2a87"}
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.909788 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kt5b7" event={"ID":"089ac99c-bbe9-48d2-81fe-a021c3c218a4","Type":"ContainerStarted","Data":"e4a36c66689bae91e6f6b262e3c25a0740b9d8765fd15a1601cba28d5e3242d0"}
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.910113 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kt5b7"
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.912051 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-zxdzm" event={"ID":"fd30d3d9-6a0a-4e01-b78b-2c45f3eb10e5","Type":"ContainerStarted","Data":"fc1ad9cb579879c14db80e3eb5883b5a46891767f1931a22ceb513e4d845cc8e"}
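The PLEG burst above, which notably includes the hostpath-provisioner/csi-hostpathplugin-crmjf pod whose plugin should eventually register the missing driver, is easier to read as an ordered timeline. A sketch in the same vein, under the same hypothetical kubelet.log capture assumption:

#!/usr/bin/env python3
# pleg_timeline.py - order the "SyncLoop (PLEG)" ContainerStarted events.
import re

PAT = re.compile(r'I(?P<ts>\d{4} [\d:.]+).*?"SyncLoop \(PLEG\): event for pod" '
                 r'pod="(?P<pod>[^"]+)" event=.*?"Type":"(?P<type>\w+)"')

events = []
with open("kubelet.log") as fh:            # hypothetical capture of this journal
    for line in fh:
        m = PAT.search(line)
        if m and m["type"] == "ContainerStarted":
            events.append((m["ts"], m["pod"]))

for ts, pod in sorted(events):
    print(ts, pod)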
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.914645 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a58b80a16bd2cc208ab14f87a03ce61d31cba36d849ac6bc02f67b742c87ad84"}
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.920543 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-7zssb" event={"ID":"9628ffe6-8bd9-40c2-82d9-d844078b7086","Type":"ContainerStarted","Data":"8da2550631beef9b78817aa9bdc519497563e5dd3c13446234297893d7f39094"}
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.928922 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"8f0ecef32ac6ed54cae8cc32bdee17606c1206e2df4783d0e8a39942af7add4f"}
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.929230 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"60b9401c9c75b7fde442091e08ec9b1fc5a428d5ac8388957d0b434c002ddb9a"}
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.930019 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.935269 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-d8l67" event={"ID":"89eca04b-5abc-42ee-8878-094433bfe94b","Type":"ContainerStarted","Data":"ab674144f2cfe640eeb3611269b011fd2a7048b6fd5223e383a22eb35e799314"}
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.935564 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-d8l67" event={"ID":"89eca04b-5abc-42ee-8878-094433bfe94b","Type":"ContainerStarted","Data":"68aff6e4f6834a5c3f9475309e6d55ed3ff6dd66f43fbfb8ac44bdd1f4144ff5"}
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.942047 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" event={"ID":"a99b07fd-7413-4523-8812-f0c7fe540f6d","Type":"ContainerStarted","Data":"bab1c155c42635a54ebe34bb9eac01aee0315abad29a5741c0eb0a9df9757120"}
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.956877 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n8m6h" event={"ID":"7d34fd54-1d88-420a-a4cc-405e9d3900ab","Type":"ContainerStarted","Data":"673e33ea82eaf998e3163cc1318e4e10fdaab6ac2d5067849189d5e4e279d451"}
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.958583 4766 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-ffkp6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body=
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.958624 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ffkp6" podUID="eaa7f58f-baeb-4ce9-8752-b1deb9ec5103" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused"
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.959008 4766 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-x28zg container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.959037 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x28zg" podUID="c9f47551-239d-46a0-857b-34eec9853f06" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused"
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.959093 4766 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ztc7c container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body=
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.959109 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" podUID="72cf9723-cba4-4f3b-90c4-c8b919e9b7a8" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused"
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.959748 4766 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-9wk84 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body=
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.959828 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9wk84" podUID="699d99f6-a65f-4822-80af-7f254046575f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused"
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.969954 4766 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-qkwt7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.970013 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkwt7" podUID="53d8ed7c-3414-4b1f-98c0-5b577dbc5b31" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused"
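The readiness-probe refusals above all share one shape: a freshly started container whose endpoint is not listening yet. A last sketch, under the same kubelet.log capture assumption, tallies them by pod and probe type, which helps separate this transient startup noise from anything persistent:

#!/usr/bin/env python3
# probe_failures.py - tally "Probe failed" entries by pod and probe type.
from collections import Counter
import re

PAT = re.compile(r'"Probe failed" probeType="(?P<type>\w+)" pod="(?P<pod>[^"]+)"')

tally = Counter()
with open("kubelet.log") as fh:            # hypothetical capture of this journal
    for line in fh:
        m = PAT.search(line)
        if m:
            tally[(m["pod"], m["type"])] += 1

for (pod, ptype), n in tally.most_common():
    print(f"{n:3d}  {ptype:9s}  {pod}")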
observedRunningTime="2026-01-29 11:23:56.969591054 +0000 UTC m=+174.081984065" watchObservedRunningTime="2026-01-29 11:23:56.971534889 +0000 UTC m=+174.083927900" Jan 29 11:23:56 crc kubenswrapper[4766]: I0129 11:23:56.995664 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:56 crc kubenswrapper[4766]: E0129 11:23:56.998679 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:57.498657612 +0000 UTC m=+174.611050623 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.032996 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-29 11:18:55 +0000 UTC, rotation deadline is 2026-12-08 19:18:56.995481767 +0000 UTC Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.033298 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7519h54m59.962188167s for next certificate rotation Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.067834 4766 patch_prober.go:28] interesting pod/router-default-5444994796-h54ww container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:23:57 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 29 11:23:57 crc kubenswrapper[4766]: [+]process-running ok Jan 29 11:23:57 crc kubenswrapper[4766]: healthz check failed Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.067963 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h54ww" podUID="faf12f57-ca0e-47d4-bb9c-06b758d0ebbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.069193 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2st4g" podStartSLOduration=138.069167513 podStartE2EDuration="2m18.069167513s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:57.067067023 +0000 UTC m=+174.179460044" watchObservedRunningTime="2026-01-29 11:23:57.069167513 +0000 UTC m=+174.181560524" Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.098172 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:23:57 crc kubenswrapper[4766]: E0129 11:23:57.100516 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:57.600491796 +0000 UTC m=+174.712884807 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.114608 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kt5b7" podStartSLOduration=138.114580288 podStartE2EDuration="2m18.114580288s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:57.112925091 +0000 UTC m=+174.225318122" watchObservedRunningTime="2026-01-29 11:23:57.114580288 +0000 UTC m=+174.226973299" Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.158363 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n8m6h" podStartSLOduration=138.158338926 podStartE2EDuration="2m18.158338926s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:57.157326027 +0000 UTC m=+174.269719038" watchObservedRunningTime="2026-01-29 11:23:57.158338926 +0000 UTC m=+174.270731967" Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.185734 4766 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-qkwt7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.185818 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkwt7" podUID="53d8ed7c-3414-4b1f-98c0-5b577dbc5b31" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.188163 4766 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-qkwt7 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.188240 4766 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkwt7" podUID="53d8ed7c-3414-4b1f-98c0-5b577dbc5b31" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.200444 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:57 crc kubenswrapper[4766]: E0129 11:23:57.200895 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:57.700878969 +0000 UTC m=+174.813271980 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.246687 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-zxdzm" podStartSLOduration=138.246663174 podStartE2EDuration="2m18.246663174s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:57.240172329 +0000 UTC m=+174.352565340" watchObservedRunningTime="2026-01-29 11:23:57.246663174 +0000 UTC m=+174.359056195" Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.247229 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" podStartSLOduration=139.24722475 podStartE2EDuration="2m19.24722475s" podCreationTimestamp="2026-01-29 11:21:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:57.199650794 +0000 UTC m=+174.312043825" watchObservedRunningTime="2026-01-29 11:23:57.24722475 +0000 UTC m=+174.359617761" Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.301708 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:23:57 crc kubenswrapper[4766]: E0129 11:23:57.301949 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:57.801912739 +0000 UTC m=+174.914305750 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.302206 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql"
Jan 29 11:23:57 crc kubenswrapper[4766]: E0129 11:23:57.302652 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:57.80263478 +0000 UTC m=+174.915027791 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.305967 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-cnns4" podStartSLOduration=8.305946164 podStartE2EDuration="8.305946164s" podCreationTimestamp="2026-01-29 11:23:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:57.280954112 +0000 UTC m=+174.393347133" watchObservedRunningTime="2026-01-29 11:23:57.305946164 +0000 UTC m=+174.418339175"
Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.368206 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-d8l67" podStartSLOduration=138.368181339 podStartE2EDuration="2m18.368181339s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:57.3082626 +0000 UTC m=+174.420655621" watchObservedRunningTime="2026-01-29 11:23:57.368181339 +0000 UTC m=+174.480574350"
Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.404072 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:23:57 crc kubenswrapper[4766]: E0129 11:23:57.404501 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed.
No retries permitted until 2026-01-29 11:23:57.904479754 +0000 UTC m=+175.016872765 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.421843 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfbhv" podStartSLOduration=138.421814258 podStartE2EDuration="2m18.421814258s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:57.383530057 +0000 UTC m=+174.495923078" watchObservedRunningTime="2026-01-29 11:23:57.421814258 +0000 UTC m=+174.534207269"
Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.506962 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql"
Jan 29 11:23:57 crc kubenswrapper[4766]: E0129 11:23:57.507339 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:58.007324836 +0000 UTC m=+175.119717847 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.518653 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-7zssb" podStartSLOduration=138.518629539 podStartE2EDuration="2m18.518629539s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:23:57.466903894 +0000 UTC m=+174.579296915" watchObservedRunningTime="2026-01-29 11:23:57.518629539 +0000 UTC m=+174.631022550"
Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.608875 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:23:57 crc kubenswrapper[4766]: E0129 11:23:57.609100 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:58.109048047 +0000 UTC m=+175.221441048 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.609381 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql"
Jan 29 11:23:57 crc kubenswrapper[4766]: E0129 11:23:57.609738 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:58.109729106 +0000 UTC m=+175.222122117 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.711003 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:23:57 crc kubenswrapper[4766]: E0129 11:23:57.711119 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:58.211095967 +0000 UTC m=+175.323488978 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.711381 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql"
Jan 29 11:23:57 crc kubenswrapper[4766]: E0129 11:23:57.711702 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:58.211693284 +0000 UTC m=+175.324086295 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.813538 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:23:57 crc kubenswrapper[4766]: E0129 11:23:57.813761 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:58.313724913 +0000 UTC m=+175.426117924 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.813891 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql"
Jan 29 11:23:57 crc kubenswrapper[4766]: E0129 11:23:57.814239 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:58.314230897 +0000 UTC m=+175.426623908 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.915833 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:23:57 crc kubenswrapper[4766]: E0129 11:23:57.916082 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:58.41604325 +0000 UTC m=+175.528436261 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.916220 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql"
Jan 29 11:23:57 crc kubenswrapper[4766]: E0129 11:23:57.916766 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:58.416740939 +0000 UTC m=+175.529133940 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.969752 4766 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-9wk84 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body=
Jan 29 11:23:57 crc kubenswrapper[4766]: I0129 11:23:57.969819 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9wk84" podUID="699d99f6-a65f-4822-80af-7f254046575f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused"
Jan 29 11:23:58 crc kubenswrapper[4766]: I0129 11:23:58.017907 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:23:58 crc kubenswrapper[4766]: E0129 11:23:58.018331 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:58.518305945 +0000 UTC m=+175.630698956 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:58 crc kubenswrapper[4766]: I0129 11:23:58.051267 4766 patch_prober.go:28] interesting pod/router-default-5444994796-h54ww container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 11:23:58 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld
Jan 29 11:23:58 crc kubenswrapper[4766]: [+]process-running ok
Jan 29 11:23:58 crc kubenswrapper[4766]: healthz check failed
Jan 29 11:23:58 crc kubenswrapper[4766]: I0129 11:23:58.051349 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h54ww" podUID="faf12f57-ca0e-47d4-bb9c-06b758d0ebbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 11:23:58 crc kubenswrapper[4766]: I0129 11:23:58.119722 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql"
Jan 29 11:23:58 crc kubenswrapper[4766]: E0129 11:23:58.124688 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:58.624667598 +0000 UTC m=+175.737060789 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:58 crc kubenswrapper[4766]: I0129 11:23:58.221613 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:23:58 crc kubenswrapper[4766]: E0129 11:23:58.221879 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:58.721839259 +0000 UTC m=+175.834232270 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:58 crc kubenswrapper[4766]: I0129 11:23:58.222499 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql"
Jan 29 11:23:58 crc kubenswrapper[4766]: E0129 11:23:58.223119 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:58.723098985 +0000 UTC m=+175.835491986 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:58 crc kubenswrapper[4766]: I0129 11:23:58.324287 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:23:58 crc kubenswrapper[4766]: E0129 11:23:58.324695 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:58.82465156 +0000 UTC m=+175.937044581 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:58 crc kubenswrapper[4766]: I0129 11:23:58.325255 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql"
Jan 29 11:23:58 crc kubenswrapper[4766]: E0129 11:23:58.325709 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:58.82570011 +0000 UTC m=+175.938093121 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:58 crc kubenswrapper[4766]: I0129 11:23:58.427080 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:23:58 crc kubenswrapper[4766]: E0129 11:23:58.427613 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:58.927588465 +0000 UTC m=+176.039981476 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:58 crc kubenswrapper[4766]: I0129 11:23:58.529319 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql"
Jan 29 11:23:58 crc kubenswrapper[4766]: E0129 11:23:58.530245 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:59.030227532 +0000 UTC m=+176.142620543 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:58 crc kubenswrapper[4766]: I0129 11:23:58.631251 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:23:58 crc kubenswrapper[4766]: E0129 11:23:58.631820 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:59.131789838 +0000 UTC m=+176.244182849 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:58 crc kubenswrapper[4766]: I0129 11:23:58.733084 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql"
Jan 29 11:23:58 crc kubenswrapper[4766]: E0129 11:23:58.733590 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:59.23356999 +0000 UTC m=+176.345963001 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:58 crc kubenswrapper[4766]: I0129 11:23:58.834406 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:23:58 crc kubenswrapper[4766]: E0129 11:23:58.834776 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:59.334759105 +0000 UTC m=+176.447152116 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:58 crc kubenswrapper[4766]: I0129 11:23:58.911820 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ffkp6"
Jan 29 11:23:58 crc kubenswrapper[4766]: I0129 11:23:58.935970 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql"
Jan 29 11:23:58 crc kubenswrapper[4766]: E0129 11:23:58.936579 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:59.436556108 +0000 UTC m=+176.548949119 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.037531 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:23:59 crc kubenswrapper[4766]: E0129 11:23:59.037848 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:59.537807285 +0000 UTC m=+176.650200296 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.038193 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql"
Jan 29 11:23:59 crc kubenswrapper[4766]: E0129 11:23:59.038695 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:59.53868306 +0000 UTC m=+176.651076141 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.054011 4766 patch_prober.go:28] interesting pod/router-default-5444994796-h54ww container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 11:23:59 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld
Jan 29 11:23:59 crc kubenswrapper[4766]: [+]process-running ok
Jan 29 11:23:59 crc kubenswrapper[4766]: healthz check failed
Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.054523 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h54ww" podUID="faf12f57-ca0e-47d4-bb9c-06b758d0ebbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.139891 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:23:59 crc kubenswrapper[4766]: E0129 11:23:59.140101 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:59.640079871 +0000 UTC m=+176.752472872 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.140241 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql"
Jan 29 11:23:59 crc kubenswrapper[4766]: E0129 11:23:59.140647 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:59.640637257 +0000 UTC m=+176.753030258 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.241344 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:23:59 crc kubenswrapper[4766]: E0129 11:23:59.241569 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:59.741535034 +0000 UTC m=+176.853928045 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.241754 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql"
Jan 29 11:23:59 crc kubenswrapper[4766]: E0129 11:23:59.242342 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:59.742315036 +0000 UTC m=+176.854708107 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.342652 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:23:59 crc kubenswrapper[4766]: E0129 11:23:59.342948 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:23:59.842912904 +0000 UTC m=+176.955305915 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.444941 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql"
Jan 29 11:23:59 crc kubenswrapper[4766]: E0129 11:23:59.445535 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:23:59.945496759 +0000 UTC m=+177.057889770 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.546308 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:23:59 crc kubenswrapper[4766]: E0129 11:23:59.546554 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:24:00.04650768 +0000 UTC m=+177.158900691 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.547543 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql"
Jan 29 11:23:59 crc kubenswrapper[4766]: E0129 11:23:59.547990 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:24:00.047978452 +0000 UTC m=+177.160371453 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.648606 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:23:59 crc kubenswrapper[4766]: E0129 11:23:59.648829 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:24:00.148792386 +0000 UTC m=+177.261185397 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.649088 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql"
Jan 29 11:23:59 crc kubenswrapper[4766]: E0129 11:23:59.649539 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:24:00.149520057 +0000 UTC m=+177.261913068 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.750134 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:23:59 crc kubenswrapper[4766]: E0129 11:23:59.750334 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:24:00.25030196 +0000 UTC m=+177.362694971 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.751384 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:59 crc kubenswrapper[4766]: E0129 11:23:59.751801 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:24:00.251787053 +0000 UTC m=+177.364180064 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.819299 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.820312 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.821182 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.824710 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.825542 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.836718 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.838604 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-bqx75" Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.840494 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-bqx75 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.840534 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bqx75" podUID="3992a1ef-5774-468c-9640-cd23218862cc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.840576 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-bqx75 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.840642 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bqx75" podUID="3992a1ef-5774-468c-9640-cd23218862cc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.840757 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-bqx75 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.840784 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bqx75" podUID="3992a1ef-5774-468c-9640-cd23218862cc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.844798 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-phb5g" Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.853013 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:23:59 crc kubenswrapper[4766]: E0129 11:23:59.853218 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:24:00.353187954 +0000 UTC m=+177.465580955 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.853493 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:23:59 crc kubenswrapper[4766]: E0129 11:23:59.853869 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:24:00.353861183 +0000 UTC m=+177.466254194 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.892782 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-ncttr" Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.893097 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-ncttr" Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.895526 4766 patch_prober.go:28] interesting pod/console-f9d7485db-ncttr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.895571 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-ncttr" podUID="569bc384-3b96-4207-8d46-5a27bf7f21cd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: connect: connection refused" Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.955009 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.955213 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a912aac9-9f90-4b9d-a6f1-6418706260ad-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a912aac9-9f90-4b9d-a6f1-6418706260ad\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 11:23:59 crc kubenswrapper[4766]: I0129 11:23:59.955344 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a912aac9-9f90-4b9d-a6f1-6418706260ad-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a912aac9-9f90-4b9d-a6f1-6418706260ad\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 11:23:59 crc kubenswrapper[4766]: E0129 11:23:59.956716 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:24:00.456691725 +0000 UTC m=+177.569084736 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.002540 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-crmjf" event={"ID":"06681e16-3449-44aa-9680-1f1566bca8f3","Type":"ContainerStarted","Data":"ea83e719d98d1d32871811491631ef6b948bbbd9bfd47a00e492b6ab42163a8f"} Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.002606 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-crmjf" event={"ID":"06681e16-3449-44aa-9680-1f1566bca8f3","Type":"ContainerStarted","Data":"86202d251b1972898cbf4dbbb641af4f42ac676dff17fa3a7f989e8dadaab751"} Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.046953 4766 patch_prober.go:28] interesting pod/router-default-5444994796-h54ww container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:24:00 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 29 11:24:00 crc kubenswrapper[4766]: [+]process-running ok Jan 29 11:24:00 crc kubenswrapper[4766]: healthz check failed Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.047030 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h54ww" podUID="faf12f57-ca0e-47d4-bb9c-06b758d0ebbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.056652 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a912aac9-9f90-4b9d-a6f1-6418706260ad-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a912aac9-9f90-4b9d-a6f1-6418706260ad\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.056798 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a912aac9-9f90-4b9d-a6f1-6418706260ad-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a912aac9-9f90-4b9d-a6f1-6418706260ad\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.056899 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.057211 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a912aac9-9f90-4b9d-a6f1-6418706260ad-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a912aac9-9f90-4b9d-a6f1-6418706260ad\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
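The entries above capture a CSI registration race: the kubelet keeps retrying MountDevice/TearDown for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 on a fixed 500 ms backoff (the "durationBeforeRetry 500ms" in each nestedpendingoperations error) because "kubevirt.io.hostpath-provisioner" is not yet in its list of registered CSI drivers, even though the csi-hostpathplugin-crmjf containers have just started. One way to observe that driver list from outside the kubelet is to read the node's CSINode object, which the node-registration sidecar populates. The sketch below is illustrative only, not part of the log or of kubelet code; it assumes client-go is available and reuses the node name "crc" from these entries.

```go
// Illustrative sketch (assumption: client-go, default kubeconfig): list the
// CSI drivers registered on a node by reading its CSINode object. The mount
// errors above occur while "kubevirt.io.hostpath-provisioner" is still
// absent from this list.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// "crc" is the node name appearing in these journal entries.
	csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Printf("registered CSI driver: %s (node ID %q)\n", d.Name, d.NodeID)
	}
}
```

Consistent with this, once the plugin_watcher picks up /var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock at 11:24:00 and csi_plugin.go validates and registers the driver, the next 500 ms retry succeeds: MountDevice is skipped (STAGE_UNSTAGE_VOLUME not advertised) and MountVolume.SetUp for the image-registry PVC completes at 11:24:01.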
Jan 29 11:24:00 crc kubenswrapper[4766]: E0129 11:24:00.057338 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:24:00.557314814 +0000 UTC m=+177.669707865 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.093190 4766 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.106474 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a912aac9-9f90-4b9d-a6f1-6418706260ad-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a912aac9-9f90-4b9d-a6f1-6418706260ad\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.140787 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.160158 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:24:00 crc kubenswrapper[4766]: E0129 11:24:00.160782 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:24:00.660740923 +0000 UTC m=+177.773133944 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.160973 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:24:00 crc kubenswrapper[4766]: E0129 11:24:00.162339 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:24:00.662319558 +0000 UTC m=+177.774712749 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.167625 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.167865 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.170032 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.181059 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.197450 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkwt7" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.266470 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:24:00 crc kubenswrapper[4766]: E0129 11:24:00.268670 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:24:00.76863983 +0000 UTC m=+177.881032841 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.370559 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:24:00 crc kubenswrapper[4766]: E0129 11:24:00.371326 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:24:00.871295937 +0000 UTC m=+177.983689138 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.432322 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9bpkx"] Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.435858 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9bpkx" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.441051 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.444486 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9bpkx"] Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.472286 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:24:00 crc kubenswrapper[4766]: E0129 11:24:00.472907 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:24:00.972880593 +0000 UTC m=+178.085273604 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.574634 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sngcf\" (UniqueName: \"kubernetes.io/projected/d4adf06b-9f3e-42f1-b70f-31ec39923b11-kube-api-access-sngcf\") pod \"certified-operators-9bpkx\" (UID: \"d4adf06b-9f3e-42f1-b70f-31ec39923b11\") " pod="openshift-marketplace/certified-operators-9bpkx" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.574719 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4adf06b-9f3e-42f1-b70f-31ec39923b11-catalog-content\") pod \"certified-operators-9bpkx\" (UID: \"d4adf06b-9f3e-42f1-b70f-31ec39923b11\") " pod="openshift-marketplace/certified-operators-9bpkx" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.574776 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4adf06b-9f3e-42f1-b70f-31ec39923b11-utilities\") pod \"certified-operators-9bpkx\" (UID: \"d4adf06b-9f3e-42f1-b70f-31ec39923b11\") " pod="openshift-marketplace/certified-operators-9bpkx" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.574805 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:24:00 crc kubenswrapper[4766]: E0129 11:24:00.575165 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:24:01.07515108 +0000 UTC m=+178.187544091 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.597119 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.644809 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tx9nf"] Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.646313 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tx9nf" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.650334 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.676240 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.676510 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4adf06b-9f3e-42f1-b70f-31ec39923b11-catalog-content\") pod \"certified-operators-9bpkx\" (UID: \"d4adf06b-9f3e-42f1-b70f-31ec39923b11\") " pod="openshift-marketplace/certified-operators-9bpkx" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.676568 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4adf06b-9f3e-42f1-b70f-31ec39923b11-utilities\") pod \"certified-operators-9bpkx\" (UID: \"d4adf06b-9f3e-42f1-b70f-31ec39923b11\") " pod="openshift-marketplace/certified-operators-9bpkx" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.676621 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sngcf\" (UniqueName: \"kubernetes.io/projected/d4adf06b-9f3e-42f1-b70f-31ec39923b11-kube-api-access-sngcf\") pod \"certified-operators-9bpkx\" (UID: \"d4adf06b-9f3e-42f1-b70f-31ec39923b11\") " pod="openshift-marketplace/certified-operators-9bpkx" Jan 29 11:24:00 crc kubenswrapper[4766]: E0129 11:24:00.677241 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:24:01.17721837 +0000 UTC m=+178.289611391 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.677749 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4adf06b-9f3e-42f1-b70f-31ec39923b11-catalog-content\") pod \"certified-operators-9bpkx\" (UID: \"d4adf06b-9f3e-42f1-b70f-31ec39923b11\") " pod="openshift-marketplace/certified-operators-9bpkx" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.678046 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4adf06b-9f3e-42f1-b70f-31ec39923b11-utilities\") pod \"certified-operators-9bpkx\" (UID: \"d4adf06b-9f3e-42f1-b70f-31ec39923b11\") " pod="openshift-marketplace/certified-operators-9bpkx" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.733549 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sngcf\" (UniqueName: \"kubernetes.io/projected/d4adf06b-9f3e-42f1-b70f-31ec39923b11-kube-api-access-sngcf\") pod \"certified-operators-9bpkx\" (UID: \"d4adf06b-9f3e-42f1-b70f-31ec39923b11\") " pod="openshift-marketplace/certified-operators-9bpkx" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.756217 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tx9nf"] Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.758718 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9bpkx" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.778396 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43d854e2-61c5-46d0-a85f-575c5fc51fa4-utilities\") pod \"community-operators-tx9nf\" (UID: \"43d854e2-61c5-46d0-a85f-575c5fc51fa4\") " pod="openshift-marketplace/community-operators-tx9nf" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.778634 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43d854e2-61c5-46d0-a85f-575c5fc51fa4-catalog-content\") pod \"community-operators-tx9nf\" (UID: \"43d854e2-61c5-46d0-a85f-575c5fc51fa4\") " pod="openshift-marketplace/community-operators-tx9nf" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.778792 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.778850 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k7zl\" (UniqueName: \"kubernetes.io/projected/43d854e2-61c5-46d0-a85f-575c5fc51fa4-kube-api-access-8k7zl\") pod \"community-operators-tx9nf\" (UID: \"43d854e2-61c5-46d0-a85f-575c5fc51fa4\") " pod="openshift-marketplace/community-operators-tx9nf" Jan 29 11:24:00 crc kubenswrapper[4766]: E0129 11:24:00.779378 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:24:01.279363732 +0000 UTC m=+178.391756743 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6xbql" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.848345 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6mp9b"] Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.849555 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6mp9b" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.880437 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.880841 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43d854e2-61c5-46d0-a85f-575c5fc51fa4-utilities\") pod \"community-operators-tx9nf\" (UID: \"43d854e2-61c5-46d0-a85f-575c5fc51fa4\") " pod="openshift-marketplace/community-operators-tx9nf" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.880902 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43d854e2-61c5-46d0-a85f-575c5fc51fa4-catalog-content\") pod \"community-operators-tx9nf\" (UID: \"43d854e2-61c5-46d0-a85f-575c5fc51fa4\") " pod="openshift-marketplace/community-operators-tx9nf" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.880963 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k7zl\" (UniqueName: \"kubernetes.io/projected/43d854e2-61c5-46d0-a85f-575c5fc51fa4-kube-api-access-8k7zl\") pod \"community-operators-tx9nf\" (UID: \"43d854e2-61c5-46d0-a85f-575c5fc51fa4\") " pod="openshift-marketplace/community-operators-tx9nf" Jan 29 11:24:00 crc kubenswrapper[4766]: E0129 11:24:00.881548 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:24:01.381526215 +0000 UTC m=+178.493919216 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.882054 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43d854e2-61c5-46d0-a85f-575c5fc51fa4-utilities\") pod \"community-operators-tx9nf\" (UID: \"43d854e2-61c5-46d0-a85f-575c5fc51fa4\") " pod="openshift-marketplace/community-operators-tx9nf" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.882359 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43d854e2-61c5-46d0-a85f-575c5fc51fa4-catalog-content\") pod \"community-operators-tx9nf\" (UID: \"43d854e2-61c5-46d0-a85f-575c5fc51fa4\") " pod="openshift-marketplace/community-operators-tx9nf" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.936861 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k7zl\" (UniqueName: \"kubernetes.io/projected/43d854e2-61c5-46d0-a85f-575c5fc51fa4-kube-api-access-8k7zl\") pod \"community-operators-tx9nf\" (UID: \"43d854e2-61c5-46d0-a85f-575c5fc51fa4\") " pod="openshift-marketplace/community-operators-tx9nf" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.941831 4766 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-29T11:24:00.093252809Z","Handler":null,"Name":""} Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.950198 4766 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.950250 4766 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.955694 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6mp9b"] Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.983057 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.983121 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74f9c23f-66e4-4082-b80f-f4966819b6d7-utilities\") pod \"certified-operators-6mp9b\" (UID: \"74f9c23f-66e4-4082-b80f-f4966819b6d7\") " pod="openshift-marketplace/certified-operators-6mp9b" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.983162 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74f9c23f-66e4-4082-b80f-f4966819b6d7-catalog-content\") pod \"certified-operators-6mp9b\" (UID: \"74f9c23f-66e4-4082-b80f-f4966819b6d7\") " pod="openshift-marketplace/certified-operators-6mp9b" Jan 29 11:24:00 crc kubenswrapper[4766]: I0129 11:24:00.983213 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4v74\" (UniqueName: \"kubernetes.io/projected/74f9c23f-66e4-4082-b80f-f4966819b6d7-kube-api-access-k4v74\") pod \"certified-operators-6mp9b\" (UID: \"74f9c23f-66e4-4082-b80f-f4966819b6d7\") " pod="openshift-marketplace/certified-operators-6mp9b" Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.011862 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tx9nf" Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.016007 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.016043 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.051278 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bd99b"] Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.054592 4766 patch_prober.go:28] interesting pod/router-default-5444994796-h54ww container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:24:01 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 29 11:24:01 crc kubenswrapper[4766]: [+]process-running ok Jan 29 11:24:01 crc kubenswrapper[4766]: healthz check failed Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.054667 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h54ww" podUID="faf12f57-ca0e-47d4-bb9c-06b758d0ebbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.056779 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bd99b" Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.059004 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-crmjf" event={"ID":"06681e16-3449-44aa-9680-1f1566bca8f3","Type":"ContainerStarted","Data":"771117dc835251a360ae213d2f35e5c23f2ce8f3e9f412cf8a689850c83ce8aa"} Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.068944 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a912aac9-9f90-4b9d-a6f1-6418706260ad","Type":"ContainerStarted","Data":"a7ae5df3af534bc1049aa145c822dda86e0210426ff3400ef253c72ba8854051"} Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.071542 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bd99b"] Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.084465 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74f9c23f-66e4-4082-b80f-f4966819b6d7-catalog-content\") pod \"certified-operators-6mp9b\" (UID: \"74f9c23f-66e4-4082-b80f-f4966819b6d7\") " pod="openshift-marketplace/certified-operators-6mp9b" Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.084533 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4v74\" (UniqueName: \"kubernetes.io/projected/74f9c23f-66e4-4082-b80f-f4966819b6d7-kube-api-access-k4v74\") pod \"certified-operators-6mp9b\" (UID: \"74f9c23f-66e4-4082-b80f-f4966819b6d7\") " pod="openshift-marketplace/certified-operators-6mp9b" Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.084619 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74f9c23f-66e4-4082-b80f-f4966819b6d7-utilities\") pod \"certified-operators-6mp9b\" (UID: \"74f9c23f-66e4-4082-b80f-f4966819b6d7\") " pod="openshift-marketplace/certified-operators-6mp9b" Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.085505 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74f9c23f-66e4-4082-b80f-f4966819b6d7-utilities\") pod \"certified-operators-6mp9b\" (UID: \"74f9c23f-66e4-4082-b80f-f4966819b6d7\") " pod="openshift-marketplace/certified-operators-6mp9b" Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.085721 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74f9c23f-66e4-4082-b80f-f4966819b6d7-catalog-content\") pod \"certified-operators-6mp9b\" (UID: \"74f9c23f-66e4-4082-b80f-f4966819b6d7\") " pod="openshift-marketplace/certified-operators-6mp9b" Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.109541 4766 patch_prober.go:28] interesting pod/apiserver-76f77b778f-n4rj2 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 29 11:24:01 crc kubenswrapper[4766]: [+]log ok Jan 29 11:24:01 crc kubenswrapper[4766]: [+]etcd ok Jan 29 11:24:01 crc kubenswrapper[4766]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 29 11:24:01 crc kubenswrapper[4766]: [+]poststarthook/generic-apiserver-start-informers ok Jan 29 11:24:01 crc kubenswrapper[4766]: [+]poststarthook/max-in-flight-filter ok Jan 29 11:24:01 
crc kubenswrapper[4766]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 29 11:24:01 crc kubenswrapper[4766]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 29 11:24:01 crc kubenswrapper[4766]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 29 11:24:01 crc kubenswrapper[4766]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 29 11:24:01 crc kubenswrapper[4766]: [+]poststarthook/project.openshift.io-projectcache ok Jan 29 11:24:01 crc kubenswrapper[4766]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 29 11:24:01 crc kubenswrapper[4766]: [+]poststarthook/openshift.io-startinformers ok Jan 29 11:24:01 crc kubenswrapper[4766]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 29 11:24:01 crc kubenswrapper[4766]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 29 11:24:01 crc kubenswrapper[4766]: livez check failed Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.109626 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" podUID="a99b07fd-7413-4523-8812-f0c7fe540f6d" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.120877 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-crmjf" podStartSLOduration=12.12086064 podStartE2EDuration="12.12086064s" podCreationTimestamp="2026-01-29 11:23:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:24:01.116081353 +0000 UTC m=+178.228474364" watchObservedRunningTime="2026-01-29 11:24:01.12086064 +0000 UTC m=+178.233253651" Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.132915 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4v74\" (UniqueName: \"kubernetes.io/projected/74f9c23f-66e4-4082-b80f-f4966819b6d7-kube-api-access-k4v74\") pod \"certified-operators-6mp9b\" (UID: \"74f9c23f-66e4-4082-b80f-f4966819b6d7\") " pod="openshift-marketplace/certified-operators-6mp9b" Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.152728 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6xbql\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.186343 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.186658 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tw6z\" (UniqueName: \"kubernetes.io/projected/ad6c1b2d-116e-4979-9676-c27cb40ee318-kube-api-access-8tw6z\") pod \"community-operators-bd99b\" (UID: \"ad6c1b2d-116e-4979-9676-c27cb40ee318\") " pod="openshift-marketplace/community-operators-bd99b" Jan 
29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.186774 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad6c1b2d-116e-4979-9676-c27cb40ee318-utilities\") pod \"community-operators-bd99b\" (UID: \"ad6c1b2d-116e-4979-9676-c27cb40ee318\") " pod="openshift-marketplace/community-operators-bd99b" Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.186818 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad6c1b2d-116e-4979-9676-c27cb40ee318-catalog-content\") pod \"community-operators-bd99b\" (UID: \"ad6c1b2d-116e-4979-9676-c27cb40ee318\") " pod="openshift-marketplace/community-operators-bd99b" Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.219052 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6mp9b" Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.273606 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.286335 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.289550 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad6c1b2d-116e-4979-9676-c27cb40ee318-utilities\") pod \"community-operators-bd99b\" (UID: \"ad6c1b2d-116e-4979-9676-c27cb40ee318\") " pod="openshift-marketplace/community-operators-bd99b" Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.289652 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad6c1b2d-116e-4979-9676-c27cb40ee318-catalog-content\") pod \"community-operators-bd99b\" (UID: \"ad6c1b2d-116e-4979-9676-c27cb40ee318\") " pod="openshift-marketplace/community-operators-bd99b" Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.289872 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tw6z\" (UniqueName: \"kubernetes.io/projected/ad6c1b2d-116e-4979-9676-c27cb40ee318-kube-api-access-8tw6z\") pod \"community-operators-bd99b\" (UID: \"ad6c1b2d-116e-4979-9676-c27cb40ee318\") " pod="openshift-marketplace/community-operators-bd99b" Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.290304 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad6c1b2d-116e-4979-9676-c27cb40ee318-utilities\") pod \"community-operators-bd99b\" (UID: \"ad6c1b2d-116e-4979-9676-c27cb40ee318\") " pod="openshift-marketplace/community-operators-bd99b" Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.290820 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad6c1b2d-116e-4979-9676-c27cb40ee318-catalog-content\") pod 
\"community-operators-bd99b\" (UID: \"ad6c1b2d-116e-4979-9676-c27cb40ee318\") " pod="openshift-marketplace/community-operators-bd99b" Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.313434 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tw6z\" (UniqueName: \"kubernetes.io/projected/ad6c1b2d-116e-4979-9676-c27cb40ee318-kube-api-access-8tw6z\") pod \"community-operators-bd99b\" (UID: \"ad6c1b2d-116e-4979-9676-c27cb40ee318\") " pod="openshift-marketplace/community-operators-bd99b" Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.399463 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bd99b" Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.433104 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9bpkx"] Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.469727 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tx9nf"] Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.472662 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-xwtsb" Jan 29 11:24:01 crc kubenswrapper[4766]: W0129 11:24:01.477625 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4adf06b_9f3e_42f1_b70f_31ec39923b11.slice/crio-8a479e2d161106294cc3ea7147073f0d6f7bbe474c7abbb090daa657810089ff WatchSource:0}: Error finding container 8a479e2d161106294cc3ea7147073f0d6f7bbe474c7abbb090daa657810089ff: Status 404 returned error can't find the container with id 8a479e2d161106294cc3ea7147073f0d6f7bbe474c7abbb090daa657810089ff Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.495405 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3910984a-a754-462f-9414-183a50bb78b8-metrics-certs\") pod \"network-metrics-daemon-xrjg5\" (UID: \"3910984a-a754-462f-9414-183a50bb78b8\") " pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.501621 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3910984a-a754-462f-9414-183a50bb78b8-metrics-certs\") pod \"network-metrics-daemon-xrjg5\" (UID: \"3910984a-a754-462f-9414-183a50bb78b8\") " pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.624507 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6mp9b"] Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.650774 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjg5" Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.709280 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6xbql"] Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.752689 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bd99b"] Jan 29 11:24:01 crc kubenswrapper[4766]: W0129 11:24:01.762050 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad6c1b2d_116e_4979_9676_c27cb40ee318.slice/crio-b4e48adb11652401ec89970243065926b7af915e6382e52cb18d5267b3466291 WatchSource:0}: Error finding container b4e48adb11652401ec89970243065926b7af915e6382e52cb18d5267b3466291: Status 404 returned error can't find the container with id b4e48adb11652401ec89970243065926b7af915e6382e52cb18d5267b3466291 Jan 29 11:24:01 crc kubenswrapper[4766]: I0129 11:24:01.763733 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.025987 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xrjg5"] Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.042687 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-h54ww" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.049175 4766 patch_prober.go:28] interesting pod/router-default-5444994796-h54ww container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:24:02 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 29 11:24:02 crc kubenswrapper[4766]: [+]process-running ok Jan 29 11:24:02 crc kubenswrapper[4766]: healthz check failed Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.049248 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h54ww" podUID="faf12f57-ca0e-47d4-bb9c-06b758d0ebbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.080111 4766 generic.go:334] "Generic (PLEG): container finished" podID="ad6c1b2d-116e-4979-9676-c27cb40ee318" containerID="740a7ad070c1fb948a4011840b005325e121074f9a6ab5f512f7899ec280bc4b" exitCode=0 Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.080194 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd99b" event={"ID":"ad6c1b2d-116e-4979-9676-c27cb40ee318","Type":"ContainerDied","Data":"740a7ad070c1fb948a4011840b005325e121074f9a6ab5f512f7899ec280bc4b"} Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.080269 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd99b" event={"ID":"ad6c1b2d-116e-4979-9676-c27cb40ee318","Type":"ContainerStarted","Data":"b4e48adb11652401ec89970243065926b7af915e6382e52cb18d5267b3466291"} Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.081107 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x28zg" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.084315 4766 
provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.086168 4766 generic.go:334] "Generic (PLEG): container finished" podID="a912aac9-9f90-4b9d-a6f1-6418706260ad" containerID="a34a455d8144f0c9b2e454e16ad1f88f2734d96879948f64340e846dd4d564e8" exitCode=0 Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.086323 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a912aac9-9f90-4b9d-a6f1-6418706260ad","Type":"ContainerDied","Data":"a34a455d8144f0c9b2e454e16ad1f88f2734d96879948f64340e846dd4d564e8"} Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.096672 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" event={"ID":"bf694c5f-16c8-4b89-9b66-976601ada400","Type":"ContainerStarted","Data":"598200f718536bddff51f4c12b6bcc642d29918b8b50b730793d828e4327eee1"} Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.096730 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" event={"ID":"bf694c5f-16c8-4b89-9b66-976601ada400","Type":"ContainerStarted","Data":"f959a8e93b87215b4953d2b0c086ba7a58474e31a38903ccb9961ece033a7fe0"} Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.097619 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.102258 4766 generic.go:334] "Generic (PLEG): container finished" podID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" containerID="ac107e2fb5b881912697082fa61f68bcf9262d11269b42b31eb876a18ec2b5e0" exitCode=0 Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.102348 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9bpkx" event={"ID":"d4adf06b-9f3e-42f1-b70f-31ec39923b11","Type":"ContainerDied","Data":"ac107e2fb5b881912697082fa61f68bcf9262d11269b42b31eb876a18ec2b5e0"} Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.102390 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9bpkx" event={"ID":"d4adf06b-9f3e-42f1-b70f-31ec39923b11","Type":"ContainerStarted","Data":"8a479e2d161106294cc3ea7147073f0d6f7bbe474c7abbb090daa657810089ff"} Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.110782 4766 generic.go:334] "Generic (PLEG): container finished" podID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" containerID="08f9491f35bea61381087a8d134ae369a9246a6aa4a3f8747455304a4df9011d" exitCode=0 Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.110953 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tx9nf" event={"ID":"43d854e2-61c5-46d0-a85f-575c5fc51fa4","Type":"ContainerDied","Data":"08f9491f35bea61381087a8d134ae369a9246a6aa4a3f8747455304a4df9011d"} Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.111012 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tx9nf" event={"ID":"43d854e2-61c5-46d0-a85f-575c5fc51fa4","Type":"ContainerStarted","Data":"5654ee4b659fbb76e08f89badb45e822f714a5a12154687843272495e574036b"} Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.115090 4766 generic.go:334] "Generic (PLEG): container finished" podID="74f9c23f-66e4-4082-b80f-f4966819b6d7" 
containerID="79996c31d26696e3569d8e05329a9d675802127eb47978793f0dd89b52cd60bf" exitCode=0 Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.115293 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6mp9b" event={"ID":"74f9c23f-66e4-4082-b80f-f4966819b6d7","Type":"ContainerDied","Data":"79996c31d26696e3569d8e05329a9d675802127eb47978793f0dd89b52cd60bf"} Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.115362 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6mp9b" event={"ID":"74f9c23f-66e4-4082-b80f-f4966819b6d7","Type":"ContainerStarted","Data":"db2e0d0a2dd51bd163a94cff98f2e280c03b5086ef5f7b70bb62f0e4261d31f9"} Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.187071 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9wk84" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.187447 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" podStartSLOduration=143.187404559 podStartE2EDuration="2m23.187404559s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:24:02.18603341 +0000 UTC m=+179.298426421" watchObservedRunningTime="2026-01-29 11:24:02.187404559 +0000 UTC m=+179.299797570" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.419181 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-plg8c"] Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.421570 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-plg8c" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.424162 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.463601 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-plg8c"] Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.528665 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a615f4a-f498-4abb-be15-10f224ff84df-catalog-content\") pod \"redhat-marketplace-plg8c\" (UID: \"8a615f4a-f498-4abb-be15-10f224ff84df\") " pod="openshift-marketplace/redhat-marketplace-plg8c" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.528755 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8fvz\" (UniqueName: \"kubernetes.io/projected/8a615f4a-f498-4abb-be15-10f224ff84df-kube-api-access-k8fvz\") pod \"redhat-marketplace-plg8c\" (UID: \"8a615f4a-f498-4abb-be15-10f224ff84df\") " pod="openshift-marketplace/redhat-marketplace-plg8c" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.528833 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a615f4a-f498-4abb-be15-10f224ff84df-utilities\") pod \"redhat-marketplace-plg8c\" (UID: \"8a615f4a-f498-4abb-be15-10f224ff84df\") " pod="openshift-marketplace/redhat-marketplace-plg8c" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.597575 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.599083 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.603509 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.604523 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.604781 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.630368 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a615f4a-f498-4abb-be15-10f224ff84df-utilities\") pod \"redhat-marketplace-plg8c\" (UID: \"8a615f4a-f498-4abb-be15-10f224ff84df\") " pod="openshift-marketplace/redhat-marketplace-plg8c" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.632277 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a615f4a-f498-4abb-be15-10f224ff84df-utilities\") pod \"redhat-marketplace-plg8c\" (UID: \"8a615f4a-f498-4abb-be15-10f224ff84df\") " pod="openshift-marketplace/redhat-marketplace-plg8c" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.637090 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a615f4a-f498-4abb-be15-10f224ff84df-catalog-content\") pod \"redhat-marketplace-plg8c\" (UID: \"8a615f4a-f498-4abb-be15-10f224ff84df\") " pod="openshift-marketplace/redhat-marketplace-plg8c" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.636454 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a615f4a-f498-4abb-be15-10f224ff84df-catalog-content\") pod \"redhat-marketplace-plg8c\" (UID: \"8a615f4a-f498-4abb-be15-10f224ff84df\") " pod="openshift-marketplace/redhat-marketplace-plg8c" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.637616 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8fvz\" (UniqueName: \"kubernetes.io/projected/8a615f4a-f498-4abb-be15-10f224ff84df-kube-api-access-k8fvz\") pod \"redhat-marketplace-plg8c\" (UID: \"8a615f4a-f498-4abb-be15-10f224ff84df\") " pod="openshift-marketplace/redhat-marketplace-plg8c" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.661496 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8fvz\" (UniqueName: \"kubernetes.io/projected/8a615f4a-f498-4abb-be15-10f224ff84df-kube-api-access-k8fvz\") pod \"redhat-marketplace-plg8c\" (UID: \"8a615f4a-f498-4abb-be15-10f224ff84df\") " pod="openshift-marketplace/redhat-marketplace-plg8c" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.739011 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d3f9ca7-41db-4be9-88f8-aa88e474258c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7d3f9ca7-41db-4be9-88f8-aa88e474258c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.739114 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/7d3f9ca7-41db-4be9-88f8-aa88e474258c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7d3f9ca7-41db-4be9-88f8-aa88e474258c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.748466 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-plg8c" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.827356 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nr9mw"] Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.829134 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nr9mw" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.840902 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d3f9ca7-41db-4be9-88f8-aa88e474258c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7d3f9ca7-41db-4be9-88f8-aa88e474258c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.840969 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d3f9ca7-41db-4be9-88f8-aa88e474258c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7d3f9ca7-41db-4be9-88f8-aa88e474258c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.841675 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d3f9ca7-41db-4be9-88f8-aa88e474258c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7d3f9ca7-41db-4be9-88f8-aa88e474258c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.845832 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nr9mw"] Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.871091 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d3f9ca7-41db-4be9-88f8-aa88e474258c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7d3f9ca7-41db-4be9-88f8-aa88e474258c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.942759 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29kcr\" (UniqueName: \"kubernetes.io/projected/6da41cd3-3d8e-498a-a988-b5d711bca9d1-kube-api-access-29kcr\") pod \"redhat-marketplace-nr9mw\" (UID: \"6da41cd3-3d8e-498a-a988-b5d711bca9d1\") " pod="openshift-marketplace/redhat-marketplace-nr9mw" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.942855 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6da41cd3-3d8e-498a-a988-b5d711bca9d1-utilities\") pod \"redhat-marketplace-nr9mw\" (UID: \"6da41cd3-3d8e-498a-a988-b5d711bca9d1\") " pod="openshift-marketplace/redhat-marketplace-nr9mw" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.942899 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/6da41cd3-3d8e-498a-a988-b5d711bca9d1-catalog-content\") pod \"redhat-marketplace-nr9mw\" (UID: \"6da41cd3-3d8e-498a-a988-b5d711bca9d1\") " pod="openshift-marketplace/redhat-marketplace-nr9mw" Jan 29 11:24:02 crc kubenswrapper[4766]: I0129 11:24:02.951884 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.032104 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-plg8c"] Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.044125 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29kcr\" (UniqueName: \"kubernetes.io/projected/6da41cd3-3d8e-498a-a988-b5d711bca9d1-kube-api-access-29kcr\") pod \"redhat-marketplace-nr9mw\" (UID: \"6da41cd3-3d8e-498a-a988-b5d711bca9d1\") " pod="openshift-marketplace/redhat-marketplace-nr9mw" Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.044201 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6da41cd3-3d8e-498a-a988-b5d711bca9d1-utilities\") pod \"redhat-marketplace-nr9mw\" (UID: \"6da41cd3-3d8e-498a-a988-b5d711bca9d1\") " pod="openshift-marketplace/redhat-marketplace-nr9mw" Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.044690 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6da41cd3-3d8e-498a-a988-b5d711bca9d1-catalog-content\") pod \"redhat-marketplace-nr9mw\" (UID: \"6da41cd3-3d8e-498a-a988-b5d711bca9d1\") " pod="openshift-marketplace/redhat-marketplace-nr9mw" Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.045137 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6da41cd3-3d8e-498a-a988-b5d711bca9d1-utilities\") pod \"redhat-marketplace-nr9mw\" (UID: \"6da41cd3-3d8e-498a-a988-b5d711bca9d1\") " pod="openshift-marketplace/redhat-marketplace-nr9mw" Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.045272 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6da41cd3-3d8e-498a-a988-b5d711bca9d1-catalog-content\") pod \"redhat-marketplace-nr9mw\" (UID: \"6da41cd3-3d8e-498a-a988-b5d711bca9d1\") " pod="openshift-marketplace/redhat-marketplace-nr9mw" Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.046655 4766 patch_prober.go:28] interesting pod/router-default-5444994796-h54ww container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:24:03 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 29 11:24:03 crc kubenswrapper[4766]: [+]process-running ok Jan 29 11:24:03 crc kubenswrapper[4766]: healthz check failed Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.046704 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h54ww" podUID="faf12f57-ca0e-47d4-bb9c-06b758d0ebbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.067954 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29kcr\" (UniqueName: 
\"kubernetes.io/projected/6da41cd3-3d8e-498a-a988-b5d711bca9d1-kube-api-access-29kcr\") pod \"redhat-marketplace-nr9mw\" (UID: \"6da41cd3-3d8e-498a-a988-b5d711bca9d1\") " pod="openshift-marketplace/redhat-marketplace-nr9mw" Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.138141 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plg8c" event={"ID":"8a615f4a-f498-4abb-be15-10f224ff84df","Type":"ContainerStarted","Data":"52771bf84dddca314e9a078755ec0bf804526b6d94e76edfdd320b028d2fe2a5"} Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.143522 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" event={"ID":"3910984a-a754-462f-9414-183a50bb78b8","Type":"ContainerStarted","Data":"9531ef38a2eb1f1dbe8d2acc47e2f70301ac949c614c7dd8f3a3ebbedc62c2cd"} Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.143594 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" event={"ID":"3910984a-a754-462f-9414-183a50bb78b8","Type":"ContainerStarted","Data":"a655525fcbbcb5301a6c68e0f5bc57c2f662b35d498271d8d0112ffab77f11f0"} Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.146019 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.152785 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nr9mw" Jan 29 11:24:03 crc kubenswrapper[4766]: W0129 11:24:03.164988 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7d3f9ca7_41db_4be9_88f8_aa88e474258c.slice/crio-9921f208dc27868b075b7c863d4d4ccb3dce07d26fa007aec16a53a068daf36d WatchSource:0}: Error finding container 9921f208dc27868b075b7c863d4d4ccb3dce07d26fa007aec16a53a068daf36d: Status 404 returned error can't find the container with id 9921f208dc27868b075b7c863d4d4ccb3dce07d26fa007aec16a53a068daf36d Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.247116 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.346745 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.426796 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nr9mw"] Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.451194 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a912aac9-9f90-4b9d-a6f1-6418706260ad-kube-api-access\") pod \"a912aac9-9f90-4b9d-a6f1-6418706260ad\" (UID: \"a912aac9-9f90-4b9d-a6f1-6418706260ad\") " Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.451242 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a912aac9-9f90-4b9d-a6f1-6418706260ad-kubelet-dir\") pod \"a912aac9-9f90-4b9d-a6f1-6418706260ad\" (UID: \"a912aac9-9f90-4b9d-a6f1-6418706260ad\") " Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.451434 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a912aac9-9f90-4b9d-a6f1-6418706260ad-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a912aac9-9f90-4b9d-a6f1-6418706260ad" (UID: "a912aac9-9f90-4b9d-a6f1-6418706260ad"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.451658 4766 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a912aac9-9f90-4b9d-a6f1-6418706260ad-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.458139 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a912aac9-9f90-4b9d-a6f1-6418706260ad-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a912aac9-9f90-4b9d-a6f1-6418706260ad" (UID: "a912aac9-9f90-4b9d-a6f1-6418706260ad"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.552972 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a912aac9-9f90-4b9d-a6f1-6418706260ad-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.831323 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8gnkm"] Jan 29 11:24:03 crc kubenswrapper[4766]: E0129 11:24:03.831588 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a912aac9-9f90-4b9d-a6f1-6418706260ad" containerName="pruner" Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.831608 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a912aac9-9f90-4b9d-a6f1-6418706260ad" containerName="pruner" Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.831728 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="a912aac9-9f90-4b9d-a6f1-6418706260ad" containerName="pruner" Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.832473 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8gnkm" Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.834379 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.850859 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8gnkm"] Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.959052 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2sg8\" (UniqueName: \"kubernetes.io/projected/8a36521a-d4cf-4c8e-8dbe-61599b472068-kube-api-access-x2sg8\") pod \"redhat-operators-8gnkm\" (UID: \"8a36521a-d4cf-4c8e-8dbe-61599b472068\") " pod="openshift-marketplace/redhat-operators-8gnkm" Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.959542 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a36521a-d4cf-4c8e-8dbe-61599b472068-utilities\") pod \"redhat-operators-8gnkm\" (UID: \"8a36521a-d4cf-4c8e-8dbe-61599b472068\") " pod="openshift-marketplace/redhat-operators-8gnkm" Jan 29 11:24:03 crc kubenswrapper[4766]: I0129 11:24:03.959573 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a36521a-d4cf-4c8e-8dbe-61599b472068-catalog-content\") pod \"redhat-operators-8gnkm\" (UID: \"8a36521a-d4cf-4c8e-8dbe-61599b472068\") " pod="openshift-marketplace/redhat-operators-8gnkm" Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.048427 4766 patch_prober.go:28] interesting pod/router-default-5444994796-h54ww container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:24:04 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 29 11:24:04 crc kubenswrapper[4766]: [+]process-running ok Jan 29 11:24:04 crc kubenswrapper[4766]: healthz check failed Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.048554 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h54ww" podUID="faf12f57-ca0e-47d4-bb9c-06b758d0ebbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.060859 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a36521a-d4cf-4c8e-8dbe-61599b472068-utilities\") pod \"redhat-operators-8gnkm\" (UID: \"8a36521a-d4cf-4c8e-8dbe-61599b472068\") " pod="openshift-marketplace/redhat-operators-8gnkm" Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.060931 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a36521a-d4cf-4c8e-8dbe-61599b472068-catalog-content\") pod \"redhat-operators-8gnkm\" (UID: \"8a36521a-d4cf-4c8e-8dbe-61599b472068\") " pod="openshift-marketplace/redhat-operators-8gnkm" Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.061007 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2sg8\" (UniqueName: \"kubernetes.io/projected/8a36521a-d4cf-4c8e-8dbe-61599b472068-kube-api-access-x2sg8\") pod 
\"redhat-operators-8gnkm\" (UID: \"8a36521a-d4cf-4c8e-8dbe-61599b472068\") " pod="openshift-marketplace/redhat-operators-8gnkm" Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.061650 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a36521a-d4cf-4c8e-8dbe-61599b472068-utilities\") pod \"redhat-operators-8gnkm\" (UID: \"8a36521a-d4cf-4c8e-8dbe-61599b472068\") " pod="openshift-marketplace/redhat-operators-8gnkm" Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.061673 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a36521a-d4cf-4c8e-8dbe-61599b472068-catalog-content\") pod \"redhat-operators-8gnkm\" (UID: \"8a36521a-d4cf-4c8e-8dbe-61599b472068\") " pod="openshift-marketplace/redhat-operators-8gnkm" Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.093453 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2sg8\" (UniqueName: \"kubernetes.io/projected/8a36521a-d4cf-4c8e-8dbe-61599b472068-kube-api-access-x2sg8\") pod \"redhat-operators-8gnkm\" (UID: \"8a36521a-d4cf-4c8e-8dbe-61599b472068\") " pod="openshift-marketplace/redhat-operators-8gnkm" Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.148638 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8gnkm" Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.160658 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plg8c" event={"ID":"8a615f4a-f498-4abb-be15-10f224ff84df","Type":"ContainerStarted","Data":"fcd41e02de378fb0deba2f12849c26b4203b13d95e12945cf0fdb8b47d5a7e0c"} Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.163792 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a912aac9-9f90-4b9d-a6f1-6418706260ad","Type":"ContainerDied","Data":"a7ae5df3af534bc1049aa145c822dda86e0210426ff3400ef253c72ba8854051"} Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.163838 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7ae5df3af534bc1049aa145c822dda86e0210426ff3400ef253c72ba8854051" Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.163865 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.168762 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xrjg5" event={"ID":"3910984a-a754-462f-9414-183a50bb78b8","Type":"ContainerStarted","Data":"92bd5041372b8a2f71eb89b956aa4443f58eb75895dd3c2549086e37af64d1bc"} Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.169764 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nr9mw" event={"ID":"6da41cd3-3d8e-498a-a988-b5d711bca9d1","Type":"ContainerStarted","Data":"bf36cb9f8f0e88167cf9f737d79bbf34e11f5f58a6aca48ae208d9a4d89adf33"} Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.171822 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7d3f9ca7-41db-4be9-88f8-aa88e474258c","Type":"ContainerStarted","Data":"9921f208dc27868b075b7c863d4d4ccb3dce07d26fa007aec16a53a068daf36d"} Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.224393 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mpsxm"] Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.229688 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mpsxm" Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.272181 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mpsxm"] Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.275235 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa1d4f87-07d9-4499-a955-15f90a40a4ad-utilities\") pod \"redhat-operators-mpsxm\" (UID: \"aa1d4f87-07d9-4499-a955-15f90a40a4ad\") " pod="openshift-marketplace/redhat-operators-mpsxm" Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.277395 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa1d4f87-07d9-4499-a955-15f90a40a4ad-catalog-content\") pod \"redhat-operators-mpsxm\" (UID: \"aa1d4f87-07d9-4499-a955-15f90a40a4ad\") " pod="openshift-marketplace/redhat-operators-mpsxm" Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.277473 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqq9c\" (UniqueName: \"kubernetes.io/projected/aa1d4f87-07d9-4499-a955-15f90a40a4ad-kube-api-access-sqq9c\") pod \"redhat-operators-mpsxm\" (UID: \"aa1d4f87-07d9-4499-a955-15f90a40a4ad\") " pod="openshift-marketplace/redhat-operators-mpsxm" Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.381062 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa1d4f87-07d9-4499-a955-15f90a40a4ad-catalog-content\") pod \"redhat-operators-mpsxm\" (UID: \"aa1d4f87-07d9-4499-a955-15f90a40a4ad\") " pod="openshift-marketplace/redhat-operators-mpsxm" Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.381510 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqq9c\" (UniqueName: \"kubernetes.io/projected/aa1d4f87-07d9-4499-a955-15f90a40a4ad-kube-api-access-sqq9c\") pod \"redhat-operators-mpsxm\" (UID: \"aa1d4f87-07d9-4499-a955-15f90a40a4ad\") " 
pod="openshift-marketplace/redhat-operators-mpsxm" Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.381568 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa1d4f87-07d9-4499-a955-15f90a40a4ad-utilities\") pod \"redhat-operators-mpsxm\" (UID: \"aa1d4f87-07d9-4499-a955-15f90a40a4ad\") " pod="openshift-marketplace/redhat-operators-mpsxm" Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.382133 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa1d4f87-07d9-4499-a955-15f90a40a4ad-utilities\") pod \"redhat-operators-mpsxm\" (UID: \"aa1d4f87-07d9-4499-a955-15f90a40a4ad\") " pod="openshift-marketplace/redhat-operators-mpsxm" Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.382432 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa1d4f87-07d9-4499-a955-15f90a40a4ad-catalog-content\") pod \"redhat-operators-mpsxm\" (UID: \"aa1d4f87-07d9-4499-a955-15f90a40a4ad\") " pod="openshift-marketplace/redhat-operators-mpsxm" Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.439639 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqq9c\" (UniqueName: \"kubernetes.io/projected/aa1d4f87-07d9-4499-a955-15f90a40a4ad-kube-api-access-sqq9c\") pod \"redhat-operators-mpsxm\" (UID: \"aa1d4f87-07d9-4499-a955-15f90a40a4ad\") " pod="openshift-marketplace/redhat-operators-mpsxm" Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.483884 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8gnkm"] Jan 29 11:24:04 crc kubenswrapper[4766]: W0129 11:24:04.489497 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a36521a_d4cf_4c8e_8dbe_61599b472068.slice/crio-c3678c7dde3b21a4082f3c5916dcaa0338b5a338bb2a36d1bc754bdd618a7bbf WatchSource:0}: Error finding container c3678c7dde3b21a4082f3c5916dcaa0338b5a338bb2a36d1bc754bdd618a7bbf: Status 404 returned error can't find the container with id c3678c7dde3b21a4082f3c5916dcaa0338b5a338bb2a36d1bc754bdd618a7bbf Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.595172 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mpsxm" Jan 29 11:24:04 crc kubenswrapper[4766]: I0129 11:24:04.837218 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mpsxm"] Jan 29 11:24:04 crc kubenswrapper[4766]: W0129 11:24:04.847073 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa1d4f87_07d9_4499_a955_15f90a40a4ad.slice/crio-b73fe5ef91a070ab5c332eab9844518154ff9d2a6d1a4fbd0e7ab1ab71ad7ac7 WatchSource:0}: Error finding container b73fe5ef91a070ab5c332eab9844518154ff9d2a6d1a4fbd0e7ab1ab71ad7ac7: Status 404 returned error can't find the container with id b73fe5ef91a070ab5c332eab9844518154ff9d2a6d1a4fbd0e7ab1ab71ad7ac7 Jan 29 11:24:05 crc kubenswrapper[4766]: I0129 11:24:05.046968 4766 patch_prober.go:28] interesting pod/router-default-5444994796-h54ww container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:24:05 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 29 11:24:05 crc kubenswrapper[4766]: [+]process-running ok Jan 29 11:24:05 crc kubenswrapper[4766]: healthz check failed Jan 29 11:24:05 crc kubenswrapper[4766]: I0129 11:24:05.047448 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h54ww" podUID="faf12f57-ca0e-47d4-bb9c-06b758d0ebbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:24:05 crc kubenswrapper[4766]: I0129 11:24:05.171859 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:24:05 crc kubenswrapper[4766]: I0129 11:24:05.178152 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-n4rj2" Jan 29 11:24:05 crc kubenswrapper[4766]: I0129 11:24:05.201038 4766 generic.go:334] "Generic (PLEG): container finished" podID="8a615f4a-f498-4abb-be15-10f224ff84df" containerID="fcd41e02de378fb0deba2f12849c26b4203b13d95e12945cf0fdb8b47d5a7e0c" exitCode=0 Jan 29 11:24:05 crc kubenswrapper[4766]: I0129 11:24:05.201450 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plg8c" event={"ID":"8a615f4a-f498-4abb-be15-10f224ff84df","Type":"ContainerDied","Data":"fcd41e02de378fb0deba2f12849c26b4203b13d95e12945cf0fdb8b47d5a7e0c"} Jan 29 11:24:05 crc kubenswrapper[4766]: I0129 11:24:05.212310 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gnkm" event={"ID":"8a36521a-d4cf-4c8e-8dbe-61599b472068","Type":"ContainerStarted","Data":"b756eb040b45cb3adb12677d2ba3e909cc54ab18c5026320c95bd50d8829b045"} Jan 29 11:24:05 crc kubenswrapper[4766]: I0129 11:24:05.212369 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gnkm" event={"ID":"8a36521a-d4cf-4c8e-8dbe-61599b472068","Type":"ContainerStarted","Data":"c3678c7dde3b21a4082f3c5916dcaa0338b5a338bb2a36d1bc754bdd618a7bbf"} Jan 29 11:24:05 crc kubenswrapper[4766]: I0129 11:24:05.223236 4766 generic.go:334] "Generic (PLEG): container finished" podID="6da41cd3-3d8e-498a-a988-b5d711bca9d1" containerID="a78b5b662c97e86f238ac8fa2120ef215b98b41d2804578f2d47b35ab0e7265f" exitCode=0 Jan 29 11:24:05 crc kubenswrapper[4766]: I0129 11:24:05.223366 4766 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nr9mw" event={"ID":"6da41cd3-3d8e-498a-a988-b5d711bca9d1","Type":"ContainerDied","Data":"a78b5b662c97e86f238ac8fa2120ef215b98b41d2804578f2d47b35ab0e7265f"} Jan 29 11:24:05 crc kubenswrapper[4766]: I0129 11:24:05.239559 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7d3f9ca7-41db-4be9-88f8-aa88e474258c","Type":"ContainerStarted","Data":"db9caf12dee75989308af0a863ac1061fc2a10d0d37d6bc2d8cfcbd219f63f1f"} Jan 29 11:24:05 crc kubenswrapper[4766]: I0129 11:24:05.245366 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpsxm" event={"ID":"aa1d4f87-07d9-4499-a955-15f90a40a4ad","Type":"ContainerStarted","Data":"b73fe5ef91a070ab5c332eab9844518154ff9d2a6d1a4fbd0e7ab1ab71ad7ac7"} Jan 29 11:24:05 crc kubenswrapper[4766]: I0129 11:24:05.578625 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.578592113 podStartE2EDuration="3.578592113s" podCreationTimestamp="2026-01-29 11:24:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:24:05.572723905 +0000 UTC m=+182.685116916" watchObservedRunningTime="2026-01-29 11:24:05.578592113 +0000 UTC m=+182.690985134" Jan 29 11:24:05 crc kubenswrapper[4766]: I0129 11:24:05.639285 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-xrjg5" podStartSLOduration=146.639249012 podStartE2EDuration="2m26.639249012s" podCreationTimestamp="2026-01-29 11:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:24:05.63039362 +0000 UTC m=+182.742786631" watchObservedRunningTime="2026-01-29 11:24:05.639249012 +0000 UTC m=+182.751642033" Jan 29 11:24:06 crc kubenswrapper[4766]: I0129 11:24:06.046791 4766 patch_prober.go:28] interesting pod/router-default-5444994796-h54ww container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:24:06 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 29 11:24:06 crc kubenswrapper[4766]: [+]process-running ok Jan 29 11:24:06 crc kubenswrapper[4766]: healthz check failed Jan 29 11:24:06 crc kubenswrapper[4766]: I0129 11:24:06.046899 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h54ww" podUID="faf12f57-ca0e-47d4-bb9c-06b758d0ebbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:24:06 crc kubenswrapper[4766]: I0129 11:24:06.298127 4766 generic.go:334] "Generic (PLEG): container finished" podID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" containerID="5e72ed849442e9fa0bb78013d1e491085467d00535613fb618a87c1d1ba73a17" exitCode=0 Jan 29 11:24:06 crc kubenswrapper[4766]: I0129 11:24:06.298271 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpsxm" event={"ID":"aa1d4f87-07d9-4499-a955-15f90a40a4ad","Type":"ContainerDied","Data":"5e72ed849442e9fa0bb78013d1e491085467d00535613fb618a87c1d1ba73a17"} Jan 29 11:24:06 crc kubenswrapper[4766]: I0129 11:24:06.323839 4766 generic.go:334] "Generic (PLEG): container 
finished" podID="3cfb993e-e305-4ad1-81f6-349bc2544e60" containerID="c94fa72e9e11ff303d0e43ca27cb9b3db4a372d5279771b7dce50783145d6354" exitCode=0 Jan 29 11:24:06 crc kubenswrapper[4766]: I0129 11:24:06.323998 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-ff4r9" event={"ID":"3cfb993e-e305-4ad1-81f6-349bc2544e60","Type":"ContainerDied","Data":"c94fa72e9e11ff303d0e43ca27cb9b3db4a372d5279771b7dce50783145d6354"} Jan 29 11:24:06 crc kubenswrapper[4766]: I0129 11:24:06.342611 4766 generic.go:334] "Generic (PLEG): container finished" podID="8a36521a-d4cf-4c8e-8dbe-61599b472068" containerID="b756eb040b45cb3adb12677d2ba3e909cc54ab18c5026320c95bd50d8829b045" exitCode=0 Jan 29 11:24:06 crc kubenswrapper[4766]: I0129 11:24:06.342713 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gnkm" event={"ID":"8a36521a-d4cf-4c8e-8dbe-61599b472068","Type":"ContainerDied","Data":"b756eb040b45cb3adb12677d2ba3e909cc54ab18c5026320c95bd50d8829b045"} Jan 29 11:24:06 crc kubenswrapper[4766]: I0129 11:24:06.353817 4766 generic.go:334] "Generic (PLEG): container finished" podID="7d3f9ca7-41db-4be9-88f8-aa88e474258c" containerID="db9caf12dee75989308af0a863ac1061fc2a10d0d37d6bc2d8cfcbd219f63f1f" exitCode=0 Jan 29 11:24:06 crc kubenswrapper[4766]: I0129 11:24:06.355087 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7d3f9ca7-41db-4be9-88f8-aa88e474258c","Type":"ContainerDied","Data":"db9caf12dee75989308af0a863ac1061fc2a10d0d37d6bc2d8cfcbd219f63f1f"} Jan 29 11:24:06 crc kubenswrapper[4766]: I0129 11:24:06.821645 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-cnns4" Jan 29 11:24:07 crc kubenswrapper[4766]: I0129 11:24:07.046600 4766 patch_prober.go:28] interesting pod/router-default-5444994796-h54ww container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:24:07 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 29 11:24:07 crc kubenswrapper[4766]: [+]process-running ok Jan 29 11:24:07 crc kubenswrapper[4766]: healthz check failed Jan 29 11:24:07 crc kubenswrapper[4766]: I0129 11:24:07.047289 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h54ww" podUID="faf12f57-ca0e-47d4-bb9c-06b758d0ebbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:24:07 crc kubenswrapper[4766]: I0129 11:24:07.831646 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 11:24:07 crc kubenswrapper[4766]: I0129 11:24:07.904365 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-ff4r9" Jan 29 11:24:07 crc kubenswrapper[4766]: I0129 11:24:07.967341 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d3f9ca7-41db-4be9-88f8-aa88e474258c-kubelet-dir\") pod \"7d3f9ca7-41db-4be9-88f8-aa88e474258c\" (UID: \"7d3f9ca7-41db-4be9-88f8-aa88e474258c\") " Jan 29 11:24:07 crc kubenswrapper[4766]: I0129 11:24:07.967504 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d3f9ca7-41db-4be9-88f8-aa88e474258c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7d3f9ca7-41db-4be9-88f8-aa88e474258c" (UID: "7d3f9ca7-41db-4be9-88f8-aa88e474258c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:24:07 crc kubenswrapper[4766]: I0129 11:24:07.968736 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3cfb993e-e305-4ad1-81f6-349bc2544e60-secret-volume\") pod \"3cfb993e-e305-4ad1-81f6-349bc2544e60\" (UID: \"3cfb993e-e305-4ad1-81f6-349bc2544e60\") " Jan 29 11:24:07 crc kubenswrapper[4766]: I0129 11:24:07.968807 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d3f9ca7-41db-4be9-88f8-aa88e474258c-kube-api-access\") pod \"7d3f9ca7-41db-4be9-88f8-aa88e474258c\" (UID: \"7d3f9ca7-41db-4be9-88f8-aa88e474258c\") " Jan 29 11:24:07 crc kubenswrapper[4766]: I0129 11:24:07.968863 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4846\" (UniqueName: \"kubernetes.io/projected/3cfb993e-e305-4ad1-81f6-349bc2544e60-kube-api-access-x4846\") pod \"3cfb993e-e305-4ad1-81f6-349bc2544e60\" (UID: \"3cfb993e-e305-4ad1-81f6-349bc2544e60\") " Jan 29 11:24:07 crc kubenswrapper[4766]: I0129 11:24:07.968912 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cfb993e-e305-4ad1-81f6-349bc2544e60-config-volume\") pod \"3cfb993e-e305-4ad1-81f6-349bc2544e60\" (UID: \"3cfb993e-e305-4ad1-81f6-349bc2544e60\") " Jan 29 11:24:07 crc kubenswrapper[4766]: I0129 11:24:07.969355 4766 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d3f9ca7-41db-4be9-88f8-aa88e474258c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 11:24:07 crc kubenswrapper[4766]: I0129 11:24:07.970234 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cfb993e-e305-4ad1-81f6-349bc2544e60-config-volume" (OuterVolumeSpecName: "config-volume") pod "3cfb993e-e305-4ad1-81f6-349bc2544e60" (UID: "3cfb993e-e305-4ad1-81f6-349bc2544e60"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:24:07 crc kubenswrapper[4766]: I0129 11:24:07.978293 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d3f9ca7-41db-4be9-88f8-aa88e474258c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7d3f9ca7-41db-4be9-88f8-aa88e474258c" (UID: "7d3f9ca7-41db-4be9-88f8-aa88e474258c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:24:07 crc kubenswrapper[4766]: I0129 11:24:07.979137 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cfb993e-e305-4ad1-81f6-349bc2544e60-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3cfb993e-e305-4ad1-81f6-349bc2544e60" (UID: "3cfb993e-e305-4ad1-81f6-349bc2544e60"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:24:07 crc kubenswrapper[4766]: I0129 11:24:07.979844 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cfb993e-e305-4ad1-81f6-349bc2544e60-kube-api-access-x4846" (OuterVolumeSpecName: "kube-api-access-x4846") pod "3cfb993e-e305-4ad1-81f6-349bc2544e60" (UID: "3cfb993e-e305-4ad1-81f6-349bc2544e60"). InnerVolumeSpecName "kube-api-access-x4846". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:24:08 crc kubenswrapper[4766]: I0129 11:24:08.045844 4766 patch_prober.go:28] interesting pod/router-default-5444994796-h54ww container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:24:08 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 29 11:24:08 crc kubenswrapper[4766]: [+]process-running ok Jan 29 11:24:08 crc kubenswrapper[4766]: healthz check failed Jan 29 11:24:08 crc kubenswrapper[4766]: I0129 11:24:08.045931 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h54ww" podUID="faf12f57-ca0e-47d4-bb9c-06b758d0ebbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:24:08 crc kubenswrapper[4766]: I0129 11:24:08.070520 4766 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3cfb993e-e305-4ad1-81f6-349bc2544e60-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 11:24:08 crc kubenswrapper[4766]: I0129 11:24:08.070555 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d3f9ca7-41db-4be9-88f8-aa88e474258c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 11:24:08 crc kubenswrapper[4766]: I0129 11:24:08.070567 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4846\" (UniqueName: \"kubernetes.io/projected/3cfb993e-e305-4ad1-81f6-349bc2544e60-kube-api-access-x4846\") on node \"crc\" DevicePath \"\"" Jan 29 11:24:08 crc kubenswrapper[4766]: I0129 11:24:08.070577 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cfb993e-e305-4ad1-81f6-349bc2544e60-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 11:24:08 crc kubenswrapper[4766]: I0129 11:24:08.461500 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-ff4r9" event={"ID":"3cfb993e-e305-4ad1-81f6-349bc2544e60","Type":"ContainerDied","Data":"867e8a293f567b84226c1f7e52f48de7e291c81a4c144d872f2b53a9fdcf3dac"} Jan 29 11:24:08 crc kubenswrapper[4766]: I0129 11:24:08.461554 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="867e8a293f567b84226c1f7e52f48de7e291c81a4c144d872f2b53a9fdcf3dac" Jan 29 11:24:08 crc kubenswrapper[4766]: I0129 11:24:08.461674 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-ff4r9" Jan 29 11:24:08 crc kubenswrapper[4766]: I0129 11:24:08.468713 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7d3f9ca7-41db-4be9-88f8-aa88e474258c","Type":"ContainerDied","Data":"9921f208dc27868b075b7c863d4d4ccb3dce07d26fa007aec16a53a068daf36d"} Jan 29 11:24:08 crc kubenswrapper[4766]: I0129 11:24:08.468782 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9921f208dc27868b075b7c863d4d4ccb3dce07d26fa007aec16a53a068daf36d" Jan 29 11:24:08 crc kubenswrapper[4766]: I0129 11:24:08.468857 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 11:24:09 crc kubenswrapper[4766]: I0129 11:24:09.045018 4766 patch_prober.go:28] interesting pod/router-default-5444994796-h54ww container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:24:09 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 29 11:24:09 crc kubenswrapper[4766]: [+]process-running ok Jan 29 11:24:09 crc kubenswrapper[4766]: healthz check failed Jan 29 11:24:09 crc kubenswrapper[4766]: I0129 11:24:09.045121 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h54ww" podUID="faf12f57-ca0e-47d4-bb9c-06b758d0ebbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:24:09 crc kubenswrapper[4766]: I0129 11:24:09.838650 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-bqx75 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 29 11:24:09 crc kubenswrapper[4766]: I0129 11:24:09.839177 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bqx75" podUID="3992a1ef-5774-468c-9640-cd23218862cc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 29 11:24:09 crc kubenswrapper[4766]: I0129 11:24:09.838650 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-bqx75 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 29 11:24:09 crc kubenswrapper[4766]: I0129 11:24:09.839240 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bqx75" podUID="3992a1ef-5774-468c-9640-cd23218862cc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 29 11:24:09 crc kubenswrapper[4766]: I0129 11:24:09.893592 4766 patch_prober.go:28] interesting pod/console-f9d7485db-ncttr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Jan 29 11:24:09 crc kubenswrapper[4766]: I0129 11:24:09.893679 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-ncttr" 
podUID="569bc384-3b96-4207-8d46-5a27bf7f21cd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: connect: connection refused" Jan 29 11:24:10 crc kubenswrapper[4766]: I0129 11:24:10.046449 4766 patch_prober.go:28] interesting pod/router-default-5444994796-h54ww container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:24:10 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 29 11:24:10 crc kubenswrapper[4766]: [+]process-running ok Jan 29 11:24:10 crc kubenswrapper[4766]: healthz check failed Jan 29 11:24:10 crc kubenswrapper[4766]: I0129 11:24:10.046590 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h54ww" podUID="faf12f57-ca0e-47d4-bb9c-06b758d0ebbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:24:11 crc kubenswrapper[4766]: I0129 11:24:11.045372 4766 patch_prober.go:28] interesting pod/router-default-5444994796-h54ww container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:24:11 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 29 11:24:11 crc kubenswrapper[4766]: [+]process-running ok Jan 29 11:24:11 crc kubenswrapper[4766]: healthz check failed Jan 29 11:24:11 crc kubenswrapper[4766]: I0129 11:24:11.045652 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h54ww" podUID="faf12f57-ca0e-47d4-bb9c-06b758d0ebbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:24:12 crc kubenswrapper[4766]: I0129 11:24:12.047920 4766 patch_prober.go:28] interesting pod/router-default-5444994796-h54ww container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:24:12 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 29 11:24:12 crc kubenswrapper[4766]: [+]process-running ok Jan 29 11:24:12 crc kubenswrapper[4766]: healthz check failed Jan 29 11:24:12 crc kubenswrapper[4766]: I0129 11:24:12.047996 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h54ww" podUID="faf12f57-ca0e-47d4-bb9c-06b758d0ebbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:24:13 crc kubenswrapper[4766]: I0129 11:24:13.045614 4766 patch_prober.go:28] interesting pod/router-default-5444994796-h54ww container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:24:13 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 29 11:24:13 crc kubenswrapper[4766]: [+]process-running ok Jan 29 11:24:13 crc kubenswrapper[4766]: healthz check failed Jan 29 11:24:13 crc kubenswrapper[4766]: I0129 11:24:13.045679 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h54ww" podUID="faf12f57-ca0e-47d4-bb9c-06b758d0ebbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:24:14 crc 
kubenswrapper[4766]: I0129 11:24:14.046335 4766 patch_prober.go:28] interesting pod/router-default-5444994796-h54ww container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:24:14 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 29 11:24:14 crc kubenswrapper[4766]: [+]process-running ok Jan 29 11:24:14 crc kubenswrapper[4766]: healthz check failed Jan 29 11:24:14 crc kubenswrapper[4766]: I0129 11:24:14.046497 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h54ww" podUID="faf12f57-ca0e-47d4-bb9c-06b758d0ebbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:24:15 crc kubenswrapper[4766]: I0129 11:24:15.047933 4766 patch_prober.go:28] interesting pod/router-default-5444994796-h54ww container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:24:15 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 29 11:24:15 crc kubenswrapper[4766]: [+]process-running ok Jan 29 11:24:15 crc kubenswrapper[4766]: healthz check failed Jan 29 11:24:15 crc kubenswrapper[4766]: I0129 11:24:15.048513 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h54ww" podUID="faf12f57-ca0e-47d4-bb9c-06b758d0ebbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:24:16 crc kubenswrapper[4766]: I0129 11:24:16.044970 4766 patch_prober.go:28] interesting pod/router-default-5444994796-h54ww container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:24:16 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 29 11:24:16 crc kubenswrapper[4766]: [+]process-running ok Jan 29 11:24:16 crc kubenswrapper[4766]: healthz check failed Jan 29 11:24:16 crc kubenswrapper[4766]: I0129 11:24:16.045040 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h54ww" podUID="faf12f57-ca0e-47d4-bb9c-06b758d0ebbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:24:16 crc kubenswrapper[4766]: I0129 11:24:16.361954 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:24:16 crc kubenswrapper[4766]: I0129 11:24:16.362031 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:24:17 crc kubenswrapper[4766]: I0129 11:24:17.045986 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-h54ww" Jan 29 11:24:17 crc kubenswrapper[4766]: I0129 11:24:17.049015 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ingress/router-default-5444994796-h54ww" Jan 29 11:24:19 crc kubenswrapper[4766]: I0129 11:24:19.731347 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zj2l7"] Jan 29 11:24:19 crc kubenswrapper[4766]: I0129 11:24:19.731939 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" podUID="2cf63d06-b674-4a7b-b896-5c78bc9d412d" containerName="controller-manager" containerID="cri-o://c9ed418193af1a1da51ce2e39098ae9b1ab5ffe0809dc825165924034906ce9c" gracePeriod=30 Jan 29 11:24:19 crc kubenswrapper[4766]: I0129 11:24:19.739299 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv"] Jan 29 11:24:19 crc kubenswrapper[4766]: I0129 11:24:19.739634 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" podUID="f093c2f4-8a68-4d38-b957-21dd36402984" containerName="route-controller-manager" containerID="cri-o://789284ba86ae621d070e8ac02e93c9c37a8dd53e9a8bf96804c1378015868a4c" gracePeriod=30 Jan 29 11:24:19 crc kubenswrapper[4766]: I0129 11:24:19.838832 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-bqx75 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 29 11:24:19 crc kubenswrapper[4766]: I0129 11:24:19.839114 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bqx75" podUID="3992a1ef-5774-468c-9640-cd23218862cc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 29 11:24:19 crc kubenswrapper[4766]: I0129 11:24:19.839112 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-bqx75 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 29 11:24:19 crc kubenswrapper[4766]: I0129 11:24:19.839182 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bqx75" podUID="3992a1ef-5774-468c-9640-cd23218862cc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 29 11:24:19 crc kubenswrapper[4766]: I0129 11:24:19.839214 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-bqx75" Jan 29 11:24:19 crc kubenswrapper[4766]: I0129 11:24:19.839815 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"0be8260cba8279db0c93236ca7106096debed7784643b6f1e3faf12f21a7ddb5"} pod="openshift-console/downloads-7954f5f757-bqx75" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 29 11:24:19 crc kubenswrapper[4766]: I0129 11:24:19.839990 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-bqx75" podUID="3992a1ef-5774-468c-9640-cd23218862cc" containerName="download-server" 
containerID="cri-o://0be8260cba8279db0c93236ca7106096debed7784643b6f1e3faf12f21a7ddb5" gracePeriod=2 Jan 29 11:24:19 crc kubenswrapper[4766]: I0129 11:24:19.840092 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-bqx75 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 29 11:24:19 crc kubenswrapper[4766]: I0129 11:24:19.840179 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bqx75" podUID="3992a1ef-5774-468c-9640-cd23218862cc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 29 11:24:19 crc kubenswrapper[4766]: I0129 11:24:19.865206 4766 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-zj2l7 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 29 11:24:19 crc kubenswrapper[4766]: I0129 11:24:19.865312 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" podUID="2cf63d06-b674-4a7b-b896-5c78bc9d412d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 29 11:24:19 crc kubenswrapper[4766]: I0129 11:24:19.939916 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-ncttr" Jan 29 11:24:19 crc kubenswrapper[4766]: I0129 11:24:19.946096 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-ncttr" Jan 29 11:24:20 crc kubenswrapper[4766]: I0129 11:24:20.167916 4766 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-fs4gv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 29 11:24:20 crc kubenswrapper[4766]: I0129 11:24:20.167987 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" podUID="f093c2f4-8a68-4d38-b957-21dd36402984" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 29 11:24:21 crc kubenswrapper[4766]: I0129 11:24:21.293533 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:24:21 crc kubenswrapper[4766]: I0129 11:24:21.719514 4766 generic.go:334] "Generic (PLEG): container finished" podID="3992a1ef-5774-468c-9640-cd23218862cc" containerID="0be8260cba8279db0c93236ca7106096debed7784643b6f1e3faf12f21a7ddb5" exitCode=0 Jan 29 11:24:21 crc kubenswrapper[4766]: I0129 11:24:21.719634 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bqx75" event={"ID":"3992a1ef-5774-468c-9640-cd23218862cc","Type":"ContainerDied","Data":"0be8260cba8279db0c93236ca7106096debed7784643b6f1e3faf12f21a7ddb5"} Jan 29 11:24:21 crc kubenswrapper[4766]: I0129 
11:24:21.722942 4766 generic.go:334] "Generic (PLEG): container finished" podID="2cf63d06-b674-4a7b-b896-5c78bc9d412d" containerID="c9ed418193af1a1da51ce2e39098ae9b1ab5ffe0809dc825165924034906ce9c" exitCode=0 Jan 29 11:24:21 crc kubenswrapper[4766]: I0129 11:24:21.723119 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" event={"ID":"2cf63d06-b674-4a7b-b896-5c78bc9d412d","Type":"ContainerDied","Data":"c9ed418193af1a1da51ce2e39098ae9b1ab5ffe0809dc825165924034906ce9c"} Jan 29 11:24:22 crc kubenswrapper[4766]: I0129 11:24:22.732303 4766 generic.go:334] "Generic (PLEG): container finished" podID="f093c2f4-8a68-4d38-b957-21dd36402984" containerID="789284ba86ae621d070e8ac02e93c9c37a8dd53e9a8bf96804c1378015868a4c" exitCode=0 Jan 29 11:24:22 crc kubenswrapper[4766]: I0129 11:24:22.732360 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" event={"ID":"f093c2f4-8a68-4d38-b957-21dd36402984","Type":"ContainerDied","Data":"789284ba86ae621d070e8ac02e93c9c37a8dd53e9a8bf96804c1378015868a4c"} Jan 29 11:24:29 crc kubenswrapper[4766]: I0129 11:24:29.839449 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-bqx75 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 29 11:24:29 crc kubenswrapper[4766]: I0129 11:24:29.839927 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bqx75" podUID="3992a1ef-5774-468c-9640-cd23218862cc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 29 11:24:30 crc kubenswrapper[4766]: I0129 11:24:30.170145 4766 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-fs4gv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 29 11:24:30 crc kubenswrapper[4766]: I0129 11:24:30.170528 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" podUID="f093c2f4-8a68-4d38-b957-21dd36402984" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 29 11:24:30 crc kubenswrapper[4766]: I0129 11:24:30.865264 4766 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-zj2l7 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 11:24:30 crc kubenswrapper[4766]: I0129 11:24:30.865427 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" podUID="2cf63d06-b674-4a7b-b896-5c78bc9d412d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 11:24:32 crc kubenswrapper[4766]: I0129 
11:24:32.078825 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kt5b7" Jan 29 11:24:34 crc kubenswrapper[4766]: I0129 11:24:34.158200 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:24:39 crc kubenswrapper[4766]: I0129 11:24:39.202109 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 11:24:39 crc kubenswrapper[4766]: E0129 11:24:39.202990 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cfb993e-e305-4ad1-81f6-349bc2544e60" containerName="collect-profiles" Jan 29 11:24:39 crc kubenswrapper[4766]: I0129 11:24:39.203015 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cfb993e-e305-4ad1-81f6-349bc2544e60" containerName="collect-profiles" Jan 29 11:24:39 crc kubenswrapper[4766]: E0129 11:24:39.203053 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d3f9ca7-41db-4be9-88f8-aa88e474258c" containerName="pruner" Jan 29 11:24:39 crc kubenswrapper[4766]: I0129 11:24:39.203067 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3f9ca7-41db-4be9-88f8-aa88e474258c" containerName="pruner" Jan 29 11:24:39 crc kubenswrapper[4766]: I0129 11:24:39.203386 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d3f9ca7-41db-4be9-88f8-aa88e474258c" containerName="pruner" Jan 29 11:24:39 crc kubenswrapper[4766]: I0129 11:24:39.203821 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cfb993e-e305-4ad1-81f6-349bc2544e60" containerName="collect-profiles" Jan 29 11:24:39 crc kubenswrapper[4766]: I0129 11:24:39.205273 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:24:39 crc kubenswrapper[4766]: I0129 11:24:39.209775 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 11:24:39 crc kubenswrapper[4766]: I0129 11:24:39.209775 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 11:24:39 crc kubenswrapper[4766]: I0129 11:24:39.220917 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 11:24:39 crc kubenswrapper[4766]: I0129 11:24:39.376782 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46db8397-242b-4386-be3d-d737aef0c878-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"46db8397-242b-4386-be3d-d737aef0c878\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:24:39 crc kubenswrapper[4766]: I0129 11:24:39.376857 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/46db8397-242b-4386-be3d-d737aef0c878-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"46db8397-242b-4386-be3d-d737aef0c878\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:24:39 crc kubenswrapper[4766]: I0129 11:24:39.478857 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46db8397-242b-4386-be3d-d737aef0c878-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"46db8397-242b-4386-be3d-d737aef0c878\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:24:39 crc kubenswrapper[4766]: I0129 11:24:39.479674 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/46db8397-242b-4386-be3d-d737aef0c878-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"46db8397-242b-4386-be3d-d737aef0c878\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:24:39 crc kubenswrapper[4766]: I0129 11:24:39.479875 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/46db8397-242b-4386-be3d-d737aef0c878-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"46db8397-242b-4386-be3d-d737aef0c878\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:24:39 crc kubenswrapper[4766]: I0129 11:24:39.514304 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46db8397-242b-4386-be3d-d737aef0c878-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"46db8397-242b-4386-be3d-d737aef0c878\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:24:39 crc kubenswrapper[4766]: I0129 11:24:39.541377 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:24:39 crc kubenswrapper[4766]: I0129 11:24:39.839558 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-bqx75 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 29 11:24:39 crc kubenswrapper[4766]: I0129 11:24:39.840399 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bqx75" podUID="3992a1ef-5774-468c-9640-cd23218862cc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 29 11:24:40 crc kubenswrapper[4766]: I0129 11:24:40.169099 4766 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-fs4gv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 29 11:24:40 crc kubenswrapper[4766]: I0129 11:24:40.169187 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" podUID="f093c2f4-8a68-4d38-b957-21dd36402984" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 29 11:24:40 crc kubenswrapper[4766]: I0129 11:24:40.865048 4766 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-zj2l7 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 11:24:40 crc kubenswrapper[4766]: I0129 11:24:40.865119 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" podUID="2cf63d06-b674-4a7b-b896-5c78bc9d412d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 11:24:40 crc kubenswrapper[4766]: I0129 11:24:40.896603 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" Jan 29 11:24:40 crc kubenswrapper[4766]: I0129 11:24:40.929389 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d558b78b6-6psxn"] Jan 29 11:24:40 crc kubenswrapper[4766]: E0129 11:24:40.932155 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf63d06-b674-4a7b-b896-5c78bc9d412d" containerName="controller-manager" Jan 29 11:24:40 crc kubenswrapper[4766]: I0129 11:24:40.932264 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf63d06-b674-4a7b-b896-5c78bc9d412d" containerName="controller-manager" Jan 29 11:24:40 crc kubenswrapper[4766]: I0129 11:24:40.932492 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cf63d06-b674-4a7b-b896-5c78bc9d412d" containerName="controller-manager" Jan 29 11:24:40 crc kubenswrapper[4766]: I0129 11:24:40.933157 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d558b78b6-6psxn" Jan 29 11:24:40 crc kubenswrapper[4766]: I0129 11:24:40.940927 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d558b78b6-6psxn"] Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.003101 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf63d06-b674-4a7b-b896-5c78bc9d412d-config\") pod \"2cf63d06-b674-4a7b-b896-5c78bc9d412d\" (UID: \"2cf63d06-b674-4a7b-b896-5c78bc9d412d\") " Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.003277 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cf63d06-b674-4a7b-b896-5c78bc9d412d-serving-cert\") pod \"2cf63d06-b674-4a7b-b896-5c78bc9d412d\" (UID: \"2cf63d06-b674-4a7b-b896-5c78bc9d412d\") " Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.003829 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkznx\" (UniqueName: \"kubernetes.io/projected/2cf63d06-b674-4a7b-b896-5c78bc9d412d-kube-api-access-gkznx\") pod \"2cf63d06-b674-4a7b-b896-5c78bc9d412d\" (UID: \"2cf63d06-b674-4a7b-b896-5c78bc9d412d\") " Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.003901 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cf63d06-b674-4a7b-b896-5c78bc9d412d-client-ca\") pod \"2cf63d06-b674-4a7b-b896-5c78bc9d412d\" (UID: \"2cf63d06-b674-4a7b-b896-5c78bc9d412d\") " Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.003948 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2cf63d06-b674-4a7b-b896-5c78bc9d412d-proxy-ca-bundles\") pod \"2cf63d06-b674-4a7b-b896-5c78bc9d412d\" (UID: \"2cf63d06-b674-4a7b-b896-5c78bc9d412d\") " Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.004122 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/688635bd-e6d5-43bc-a5b8-21f485a3621b-config\") pod \"controller-manager-d558b78b6-6psxn\" (UID: \"688635bd-e6d5-43bc-a5b8-21f485a3621b\") " pod="openshift-controller-manager/controller-manager-d558b78b6-6psxn" Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.004152 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/688635bd-e6d5-43bc-a5b8-21f485a3621b-proxy-ca-bundles\") pod \"controller-manager-d558b78b6-6psxn\" (UID: \"688635bd-e6d5-43bc-a5b8-21f485a3621b\") " pod="openshift-controller-manager/controller-manager-d558b78b6-6psxn" Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.004203 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/688635bd-e6d5-43bc-a5b8-21f485a3621b-client-ca\") pod \"controller-manager-d558b78b6-6psxn\" (UID: \"688635bd-e6d5-43bc-a5b8-21f485a3621b\") " pod="openshift-controller-manager/controller-manager-d558b78b6-6psxn" Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.004230 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/688635bd-e6d5-43bc-a5b8-21f485a3621b-serving-cert\") pod \"controller-manager-d558b78b6-6psxn\" (UID: \"688635bd-e6d5-43bc-a5b8-21f485a3621b\") " pod="openshift-controller-manager/controller-manager-d558b78b6-6psxn" Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.004486 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skbj5\" (UniqueName: \"kubernetes.io/projected/688635bd-e6d5-43bc-a5b8-21f485a3621b-kube-api-access-skbj5\") pod \"controller-manager-d558b78b6-6psxn\" (UID: \"688635bd-e6d5-43bc-a5b8-21f485a3621b\") " pod="openshift-controller-manager/controller-manager-d558b78b6-6psxn" Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.004863 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf63d06-b674-4a7b-b896-5c78bc9d412d-config" (OuterVolumeSpecName: "config") pod "2cf63d06-b674-4a7b-b896-5c78bc9d412d" (UID: "2cf63d06-b674-4a7b-b896-5c78bc9d412d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.004915 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf63d06-b674-4a7b-b896-5c78bc9d412d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2cf63d06-b674-4a7b-b896-5c78bc9d412d" (UID: "2cf63d06-b674-4a7b-b896-5c78bc9d412d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.005712 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf63d06-b674-4a7b-b896-5c78bc9d412d-client-ca" (OuterVolumeSpecName: "client-ca") pod "2cf63d06-b674-4a7b-b896-5c78bc9d412d" (UID: "2cf63d06-b674-4a7b-b896-5c78bc9d412d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.008267 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cf63d06-b674-4a7b-b896-5c78bc9d412d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2cf63d06-b674-4a7b-b896-5c78bc9d412d" (UID: "2cf63d06-b674-4a7b-b896-5c78bc9d412d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.009134 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cf63d06-b674-4a7b-b896-5c78bc9d412d-kube-api-access-gkznx" (OuterVolumeSpecName: "kube-api-access-gkznx") pod "2cf63d06-b674-4a7b-b896-5c78bc9d412d" (UID: "2cf63d06-b674-4a7b-b896-5c78bc9d412d"). InnerVolumeSpecName "kube-api-access-gkznx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.105964 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/688635bd-e6d5-43bc-a5b8-21f485a3621b-client-ca\") pod \"controller-manager-d558b78b6-6psxn\" (UID: \"688635bd-e6d5-43bc-a5b8-21f485a3621b\") " pod="openshift-controller-manager/controller-manager-d558b78b6-6psxn" Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.106060 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/688635bd-e6d5-43bc-a5b8-21f485a3621b-serving-cert\") pod \"controller-manager-d558b78b6-6psxn\" (UID: \"688635bd-e6d5-43bc-a5b8-21f485a3621b\") " pod="openshift-controller-manager/controller-manager-d558b78b6-6psxn" Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.106151 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skbj5\" (UniqueName: \"kubernetes.io/projected/688635bd-e6d5-43bc-a5b8-21f485a3621b-kube-api-access-skbj5\") pod \"controller-manager-d558b78b6-6psxn\" (UID: \"688635bd-e6d5-43bc-a5b8-21f485a3621b\") " pod="openshift-controller-manager/controller-manager-d558b78b6-6psxn" Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.106334 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/688635bd-e6d5-43bc-a5b8-21f485a3621b-config\") pod \"controller-manager-d558b78b6-6psxn\" (UID: \"688635bd-e6d5-43bc-a5b8-21f485a3621b\") " pod="openshift-controller-manager/controller-manager-d558b78b6-6psxn" Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.106369 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/688635bd-e6d5-43bc-a5b8-21f485a3621b-proxy-ca-bundles\") pod \"controller-manager-d558b78b6-6psxn\" (UID: \"688635bd-e6d5-43bc-a5b8-21f485a3621b\") " pod="openshift-controller-manager/controller-manager-d558b78b6-6psxn" Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.106496 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cf63d06-b674-4a7b-b896-5c78bc9d412d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.106528 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkznx\" (UniqueName: \"kubernetes.io/projected/2cf63d06-b674-4a7b-b896-5c78bc9d412d-kube-api-access-gkznx\") on node \"crc\" DevicePath \"\"" Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.106547 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cf63d06-b674-4a7b-b896-5c78bc9d412d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.106560 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2cf63d06-b674-4a7b-b896-5c78bc9d412d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.106571 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf63d06-b674-4a7b-b896-5c78bc9d412d-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.107240 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/688635bd-e6d5-43bc-a5b8-21f485a3621b-client-ca\") pod \"controller-manager-d558b78b6-6psxn\" (UID: \"688635bd-e6d5-43bc-a5b8-21f485a3621b\") " pod="openshift-controller-manager/controller-manager-d558b78b6-6psxn" Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.108187 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/688635bd-e6d5-43bc-a5b8-21f485a3621b-config\") pod \"controller-manager-d558b78b6-6psxn\" (UID: \"688635bd-e6d5-43bc-a5b8-21f485a3621b\") " pod="openshift-controller-manager/controller-manager-d558b78b6-6psxn" Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.108307 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/688635bd-e6d5-43bc-a5b8-21f485a3621b-proxy-ca-bundles\") pod \"controller-manager-d558b78b6-6psxn\" (UID: \"688635bd-e6d5-43bc-a5b8-21f485a3621b\") " pod="openshift-controller-manager/controller-manager-d558b78b6-6psxn" Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.109879 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/688635bd-e6d5-43bc-a5b8-21f485a3621b-serving-cert\") pod \"controller-manager-d558b78b6-6psxn\" (UID: \"688635bd-e6d5-43bc-a5b8-21f485a3621b\") " pod="openshift-controller-manager/controller-manager-d558b78b6-6psxn" Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.127022 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skbj5\" (UniqueName: \"kubernetes.io/projected/688635bd-e6d5-43bc-a5b8-21f485a3621b-kube-api-access-skbj5\") pod \"controller-manager-d558b78b6-6psxn\" (UID: \"688635bd-e6d5-43bc-a5b8-21f485a3621b\") " pod="openshift-controller-manager/controller-manager-d558b78b6-6psxn" Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.254301 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d558b78b6-6psxn" Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.861199 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" event={"ID":"2cf63d06-b674-4a7b-b896-5c78bc9d412d","Type":"ContainerDied","Data":"5c6291896619dcd1a2a78864038ecfa35d04f183dc8d9a419729d8a37613fb5d"} Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.861270 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zj2l7" Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.861306 4766 scope.go:117] "RemoveContainer" containerID="c9ed418193af1a1da51ce2e39098ae9b1ab5ffe0809dc825165924034906ce9c" Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.881134 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zj2l7"] Jan 29 11:24:41 crc kubenswrapper[4766]: I0129 11:24:41.884678 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zj2l7"] Jan 29 11:24:43 crc kubenswrapper[4766]: I0129 11:24:43.233349 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cf63d06-b674-4a7b-b896-5c78bc9d412d" path="/var/lib/kubelet/pods/2cf63d06-b674-4a7b-b896-5c78bc9d412d/volumes" Jan 29 11:24:43 crc kubenswrapper[4766]: I0129 11:24:43.391166 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 11:24:43 crc kubenswrapper[4766]: I0129 11:24:43.392479 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:24:43 crc kubenswrapper[4766]: I0129 11:24:43.411646 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 11:24:43 crc kubenswrapper[4766]: I0129 11:24:43.553362 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4a84c5fe-7616-4823-9559-2f1a6dc0237e-var-lock\") pod \"installer-9-crc\" (UID: \"4a84c5fe-7616-4823-9559-2f1a6dc0237e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:24:43 crc kubenswrapper[4766]: I0129 11:24:43.553457 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a84c5fe-7616-4823-9559-2f1a6dc0237e-kube-api-access\") pod \"installer-9-crc\" (UID: \"4a84c5fe-7616-4823-9559-2f1a6dc0237e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:24:43 crc kubenswrapper[4766]: I0129 11:24:43.553495 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4a84c5fe-7616-4823-9559-2f1a6dc0237e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4a84c5fe-7616-4823-9559-2f1a6dc0237e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:24:43 crc kubenswrapper[4766]: I0129 11:24:43.655352 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4a84c5fe-7616-4823-9559-2f1a6dc0237e-var-lock\") pod \"installer-9-crc\" (UID: \"4a84c5fe-7616-4823-9559-2f1a6dc0237e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:24:43 crc kubenswrapper[4766]: I0129 11:24:43.655483 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a84c5fe-7616-4823-9559-2f1a6dc0237e-kube-api-access\") pod \"installer-9-crc\" (UID: \"4a84c5fe-7616-4823-9559-2f1a6dc0237e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:24:43 crc kubenswrapper[4766]: I0129 11:24:43.655527 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/4a84c5fe-7616-4823-9559-2f1a6dc0237e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4a84c5fe-7616-4823-9559-2f1a6dc0237e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:24:43 crc kubenswrapper[4766]: I0129 11:24:43.655556 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4a84c5fe-7616-4823-9559-2f1a6dc0237e-var-lock\") pod \"installer-9-crc\" (UID: \"4a84c5fe-7616-4823-9559-2f1a6dc0237e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:24:43 crc kubenswrapper[4766]: I0129 11:24:43.655680 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4a84c5fe-7616-4823-9559-2f1a6dc0237e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4a84c5fe-7616-4823-9559-2f1a6dc0237e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:24:43 crc kubenswrapper[4766]: I0129 11:24:43.681017 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a84c5fe-7616-4823-9559-2f1a6dc0237e-kube-api-access\") pod \"installer-9-crc\" (UID: \"4a84c5fe-7616-4823-9559-2f1a6dc0237e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:24:43 crc kubenswrapper[4766]: I0129 11:24:43.730602 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:24:46 crc kubenswrapper[4766]: I0129 11:24:46.362560 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:24:46 crc kubenswrapper[4766]: I0129 11:24:46.362652 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:24:46 crc kubenswrapper[4766]: E0129 11:24:46.473331 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 11:24:46 crc kubenswrapper[4766]: E0129 11:24:46.474044 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k8fvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-plg8c_openshift-marketplace(8a615f4a-f498-4abb-be15-10f224ff84df): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 11:24:46 crc kubenswrapper[4766]: E0129 11:24:46.475361 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-plg8c" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" Jan 29 11:24:49 crc kubenswrapper[4766]: I0129 11:24:49.840723 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-bqx75 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 29 11:24:49 crc kubenswrapper[4766]: I0129 11:24:49.840788 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bqx75" podUID="3992a1ef-5774-468c-9640-cd23218862cc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 29 11:24:51 crc kubenswrapper[4766]: I0129 11:24:51.168828 4766 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-fs4gv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 11:24:51 crc kubenswrapper[4766]: I0129 11:24:51.170663 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" podUID="f093c2f4-8a68-4d38-b957-21dd36402984" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" Jan 29 11:24:59 crc kubenswrapper[4766]: I0129 11:24:59.838435 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-bqx75 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 29 11:24:59 crc kubenswrapper[4766]: I0129 11:24:59.839019 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bqx75" podUID="3992a1ef-5774-468c-9640-cd23218862cc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 29 11:25:01 crc kubenswrapper[4766]: I0129 11:25:01.168016 4766 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-fs4gv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 11:25:01 crc kubenswrapper[4766]: I0129 11:25:01.168111 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" podUID="f093c2f4-8a68-4d38-b957-21dd36402984" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 11:25:06 crc kubenswrapper[4766]: E0129 11:25:06.953319 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 11:25:06 crc kubenswrapper[4766]: E0129 11:25:06.954115 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2sg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-8gnkm_openshift-marketplace(8a36521a-d4cf-4c8e-8dbe-61599b472068): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 11:25:06 crc kubenswrapper[4766]: E0129 11:25:06.955382 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-8gnkm" podUID="8a36521a-d4cf-4c8e-8dbe-61599b472068" Jan 29 11:25:09 crc kubenswrapper[4766]: I0129 11:25:09.839152 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-bqx75 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 29 11:25:09 crc kubenswrapper[4766]: I0129 11:25:09.839284 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bqx75" podUID="3992a1ef-5774-468c-9640-cd23218862cc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 29 11:25:11 crc kubenswrapper[4766]: I0129 11:25:11.168195 4766 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-fs4gv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 11:25:11 crc kubenswrapper[4766]: I0129 11:25:11.168285 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" podUID="f093c2f4-8a68-4d38-b957-21dd36402984" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" Jan 29 11:25:16 crc kubenswrapper[4766]: E0129 11:25:16.190054 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8gnkm" podUID="8a36521a-d4cf-4c8e-8dbe-61599b472068" Jan 29 11:25:16 crc kubenswrapper[4766]: I0129 11:25:16.362506 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:25:16 crc kubenswrapper[4766]: I0129 11:25:16.362574 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:25:16 crc kubenswrapper[4766]: I0129 11:25:16.362627 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" Jan 29 11:25:16 crc kubenswrapper[4766]: I0129 11:25:16.363321 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576"} pod="openshift-machine-config-operator/machine-config-daemon-npgg8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:25:16 crc kubenswrapper[4766]: I0129 11:25:16.363380 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" containerID="cri-o://9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576" gracePeriod=600 Jan 29 11:25:16 crc kubenswrapper[4766]: E0129 11:25:16.597549 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 11:25:16 crc kubenswrapper[4766]: E0129 11:25:16.597995 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29kcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-nr9mw_openshift-marketplace(6da41cd3-3d8e-498a-a988-b5d711bca9d1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 11:25:16 crc kubenswrapper[4766]: E0129 11:25:16.599238 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-nr9mw" podUID="6da41cd3-3d8e-498a-a988-b5d711bca9d1" Jan 29 11:25:17 crc kubenswrapper[4766]: I0129 11:25:17.146155 4766 generic.go:334] "Generic (PLEG): container finished" podID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerID="9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576" exitCode=0 Jan 29 11:25:17 crc kubenswrapper[4766]: I0129 11:25:17.146244 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" event={"ID":"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a","Type":"ContainerDied","Data":"9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576"} Jan 29 11:25:19 crc kubenswrapper[4766]: I0129 11:25:19.839345 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-bqx75 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 29 11:25:19 crc kubenswrapper[4766]: I0129 11:25:19.839447 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bqx75" podUID="3992a1ef-5774-468c-9640-cd23218862cc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 29 11:25:21 crc kubenswrapper[4766]: I0129 11:25:21.167864 4766 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-fs4gv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 11:25:21 crc kubenswrapper[4766]: I0129 11:25:21.167957 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" podUID="f093c2f4-8a68-4d38-b957-21dd36402984" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 11:25:29 crc kubenswrapper[4766]: I0129 11:25:29.838532 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-bqx75 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 29 11:25:29 crc kubenswrapper[4766]: I0129 11:25:29.839117 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bqx75" podUID="3992a1ef-5774-468c-9640-cd23218862cc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 29 11:25:31 crc kubenswrapper[4766]: I0129 11:25:31.168570 4766 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-fs4gv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 11:25:31 crc kubenswrapper[4766]: I0129 11:25:31.168703 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" podUID="f093c2f4-8a68-4d38-b957-21dd36402984" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 11:25:39 crc kubenswrapper[4766]: I0129 11:25:39.839653 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-bqx75 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 29 11:25:39 crc kubenswrapper[4766]: I0129 11:25:39.841526 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bqx75" podUID="3992a1ef-5774-468c-9640-cd23218862cc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 29 11:25:41 crc kubenswrapper[4766]: I0129 11:25:41.168483 4766 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-fs4gv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 11:25:47 crc kubenswrapper[4766]: I0129 11:25:41.168919 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" 
podUID="f093c2f4-8a68-4d38-b957-21dd36402984" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 11:25:47 crc kubenswrapper[4766]: E0129 11:25:47.104921 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 11:25:47 crc kubenswrapper[4766]: E0129 11:25:47.105088 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sngcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-9bpkx_openshift-marketplace(d4adf06b-9f3e-42f1-b70f-31ec39923b11): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 11:25:47 crc kubenswrapper[4766]: E0129 11:25:47.106453 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-9bpkx" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" Jan 29 11:25:49 crc kubenswrapper[4766]: I0129 11:25:49.838830 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-bqx75 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 29 11:25:49 crc kubenswrapper[4766]: I0129 11:25:49.839220 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bqx75" podUID="3992a1ef-5774-468c-9640-cd23218862cc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 
10.217.0.19:8080: connect: connection refused" Jan 29 11:25:51 crc kubenswrapper[4766]: I0129 11:25:51.167896 4766 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-fs4gv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 11:25:51 crc kubenswrapper[4766]: I0129 11:25:51.167974 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" podUID="f093c2f4-8a68-4d38-b957-21dd36402984" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 11:25:51 crc kubenswrapper[4766]: I0129 11:25:51.168070 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" Jan 29 11:25:54 crc kubenswrapper[4766]: E0129 11:25:54.420037 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-9bpkx" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" Jan 29 11:25:54 crc kubenswrapper[4766]: E0129 11:25:54.451024 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 11:25:54 crc kubenswrapper[4766]: E0129 11:25:54.451303 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k8fvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-plg8c_openshift-marketplace(8a615f4a-f498-4abb-be15-10f224ff84df): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 11:25:54 crc kubenswrapper[4766]: E0129 11:25:54.453835 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-plg8c" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.502698 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.530217 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k"] Jan 29 11:25:54 crc kubenswrapper[4766]: E0129 11:25:54.530870 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f093c2f4-8a68-4d38-b957-21dd36402984" containerName="route-controller-manager" Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.530884 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f093c2f4-8a68-4d38-b957-21dd36402984" containerName="route-controller-manager" Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.531031 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f093c2f4-8a68-4d38-b957-21dd36402984" containerName="route-controller-manager" Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.531494 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k" Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.533735 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k"] Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.544964 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzxpp\" (UniqueName: \"kubernetes.io/projected/f093c2f4-8a68-4d38-b957-21dd36402984-kube-api-access-wzxpp\") pod \"f093c2f4-8a68-4d38-b957-21dd36402984\" (UID: \"f093c2f4-8a68-4d38-b957-21dd36402984\") " Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.545073 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f093c2f4-8a68-4d38-b957-21dd36402984-client-ca\") pod \"f093c2f4-8a68-4d38-b957-21dd36402984\" (UID: \"f093c2f4-8a68-4d38-b957-21dd36402984\") " Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.545129 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f093c2f4-8a68-4d38-b957-21dd36402984-config\") pod \"f093c2f4-8a68-4d38-b957-21dd36402984\" (UID: \"f093c2f4-8a68-4d38-b957-21dd36402984\") " Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.545192 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f093c2f4-8a68-4d38-b957-21dd36402984-serving-cert\") pod \"f093c2f4-8a68-4d38-b957-21dd36402984\" (UID: \"f093c2f4-8a68-4d38-b957-21dd36402984\") " Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.545440 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96faba9a-6377-4d35-8809-ec064f590a37-config\") pod \"route-controller-manager-67d67997fd-npc4k\" (UID: \"96faba9a-6377-4d35-8809-ec064f590a37\") " pod="openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k" Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.545478 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96faba9a-6377-4d35-8809-ec064f590a37-serving-cert\") pod \"route-controller-manager-67d67997fd-npc4k\" (UID: \"96faba9a-6377-4d35-8809-ec064f590a37\") " pod="openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k" Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.545506 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqrwk\" (UniqueName: \"kubernetes.io/projected/96faba9a-6377-4d35-8809-ec064f590a37-kube-api-access-vqrwk\") pod \"route-controller-manager-67d67997fd-npc4k\" (UID: \"96faba9a-6377-4d35-8809-ec064f590a37\") " pod="openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k" Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.545581 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96faba9a-6377-4d35-8809-ec064f590a37-client-ca\") pod \"route-controller-manager-67d67997fd-npc4k\" (UID: \"96faba9a-6377-4d35-8809-ec064f590a37\") " pod="openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k" Jan 29 11:25:54 crc 
kubenswrapper[4766]: I0129 11:25:54.546634 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f093c2f4-8a68-4d38-b957-21dd36402984-client-ca" (OuterVolumeSpecName: "client-ca") pod "f093c2f4-8a68-4d38-b957-21dd36402984" (UID: "f093c2f4-8a68-4d38-b957-21dd36402984"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.547287 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f093c2f4-8a68-4d38-b957-21dd36402984-config" (OuterVolumeSpecName: "config") pod "f093c2f4-8a68-4d38-b957-21dd36402984" (UID: "f093c2f4-8a68-4d38-b957-21dd36402984"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.594345 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f093c2f4-8a68-4d38-b957-21dd36402984-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f093c2f4-8a68-4d38-b957-21dd36402984" (UID: "f093c2f4-8a68-4d38-b957-21dd36402984"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.598819 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f093c2f4-8a68-4d38-b957-21dd36402984-kube-api-access-wzxpp" (OuterVolumeSpecName: "kube-api-access-wzxpp") pod "f093c2f4-8a68-4d38-b957-21dd36402984" (UID: "f093c2f4-8a68-4d38-b957-21dd36402984"). InnerVolumeSpecName "kube-api-access-wzxpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.646645 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96faba9a-6377-4d35-8809-ec064f590a37-serving-cert\") pod \"route-controller-manager-67d67997fd-npc4k\" (UID: \"96faba9a-6377-4d35-8809-ec064f590a37\") " pod="openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k" Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.646688 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqrwk\" (UniqueName: \"kubernetes.io/projected/96faba9a-6377-4d35-8809-ec064f590a37-kube-api-access-vqrwk\") pod \"route-controller-manager-67d67997fd-npc4k\" (UID: \"96faba9a-6377-4d35-8809-ec064f590a37\") " pod="openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k" Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.646750 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96faba9a-6377-4d35-8809-ec064f590a37-client-ca\") pod \"route-controller-manager-67d67997fd-npc4k\" (UID: \"96faba9a-6377-4d35-8809-ec064f590a37\") " pod="openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k" Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.646798 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96faba9a-6377-4d35-8809-ec064f590a37-config\") pod \"route-controller-manager-67d67997fd-npc4k\" (UID: \"96faba9a-6377-4d35-8809-ec064f590a37\") " pod="openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k" Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.646836 4766 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-wzxpp\" (UniqueName: \"kubernetes.io/projected/f093c2f4-8a68-4d38-b957-21dd36402984-kube-api-access-wzxpp\") on node \"crc\" DevicePath \"\"" Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.646846 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f093c2f4-8a68-4d38-b957-21dd36402984-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.646856 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f093c2f4-8a68-4d38-b957-21dd36402984-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.646866 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f093c2f4-8a68-4d38-b957-21dd36402984-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.647985 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96faba9a-6377-4d35-8809-ec064f590a37-config\") pod \"route-controller-manager-67d67997fd-npc4k\" (UID: \"96faba9a-6377-4d35-8809-ec064f590a37\") " pod="openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k" Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.649436 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96faba9a-6377-4d35-8809-ec064f590a37-client-ca\") pod \"route-controller-manager-67d67997fd-npc4k\" (UID: \"96faba9a-6377-4d35-8809-ec064f590a37\") " pod="openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k" Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.653322 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96faba9a-6377-4d35-8809-ec064f590a37-serving-cert\") pod \"route-controller-manager-67d67997fd-npc4k\" (UID: \"96faba9a-6377-4d35-8809-ec064f590a37\") " pod="openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k" Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.664955 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqrwk\" (UniqueName: \"kubernetes.io/projected/96faba9a-6377-4d35-8809-ec064f590a37-kube-api-access-vqrwk\") pod \"route-controller-manager-67d67997fd-npc4k\" (UID: \"96faba9a-6377-4d35-8809-ec064f590a37\") " pod="openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k" Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.917521 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 11:25:54 crc kubenswrapper[4766]: I0129 11:25:54.936543 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k" Jan 29 11:25:55 crc kubenswrapper[4766]: I0129 11:25:55.392482 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" event={"ID":"f093c2f4-8a68-4d38-b957-21dd36402984","Type":"ContainerDied","Data":"ed10e7f650ff7f442fc6a3499d8c7693f70d810f602bb22f1b6b4aa1ab048d2f"} Jan 29 11:25:55 crc kubenswrapper[4766]: I0129 11:25:55.392512 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv" Jan 29 11:25:55 crc kubenswrapper[4766]: I0129 11:25:55.392547 4766 scope.go:117] "RemoveContainer" containerID="789284ba86ae621d070e8ac02e93c9c37a8dd53e9a8bf96804c1378015868a4c" Jan 29 11:25:55 crc kubenswrapper[4766]: I0129 11:25:55.413078 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv"] Jan 29 11:25:55 crc kubenswrapper[4766]: I0129 11:25:55.417160 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fs4gv"] Jan 29 11:25:56 crc kubenswrapper[4766]: E0129 11:25:56.422760 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 11:25:56 crc kubenswrapper[4766]: E0129 11:25:56.423030 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8k7zl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-tx9nf_openshift-marketplace(43d854e2-61c5-46d0-a85f-575c5fc51fa4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 11:25:56 crc kubenswrapper[4766]: E0129 11:25:56.424193 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-tx9nf" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" Jan 29 11:25:57 crc kubenswrapper[4766]: E0129 11:25:57.156335 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 11:25:57 crc kubenswrapper[4766]: E0129 11:25:57.156957 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8tw6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-bd99b_openshift-marketplace(ad6c1b2d-116e-4979-9676-c27cb40ee318): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 11:25:57 crc kubenswrapper[4766]: E0129 11:25:57.158243 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-bd99b" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" Jan 29 11:25:57 crc kubenswrapper[4766]: I0129 11:25:57.237263 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f093c2f4-8a68-4d38-b957-21dd36402984" path="/var/lib/kubelet/pods/f093c2f4-8a68-4d38-b957-21dd36402984/volumes" Jan 29 11:25:58 crc kubenswrapper[4766]: E0129 11:25:58.185004 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 11:25:58 crc kubenswrapper[4766]: E0129 11:25:58.185217 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k4v74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6mp9b_openshift-marketplace(74f9c23f-66e4-4082-b80f-f4966819b6d7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 11:25:58 crc kubenswrapper[4766]: E0129 11:25:58.186466 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-6mp9b" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" Jan 29 11:25:58 crc kubenswrapper[4766]: E0129 11:25:58.816805 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 11:25:58 crc kubenswrapper[4766]: E0129 11:25:58.817192 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqq9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-mpsxm_openshift-marketplace(aa1d4f87-07d9-4499-a955-15f90a40a4ad): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 11:25:58 crc kubenswrapper[4766]: E0129 11:25:58.818467 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-mpsxm" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" Jan 29 11:25:59 crc kubenswrapper[4766]: E0129 11:25:59.327944 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-tx9nf" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" Jan 29 11:25:59 crc kubenswrapper[4766]: E0129 11:25:59.328051 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6mp9b" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" Jan 29 11:25:59 crc kubenswrapper[4766]: E0129 11:25:59.328138 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-bd99b" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" Jan 29 11:25:59 crc kubenswrapper[4766]: E0129 11:25:59.328202 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-plg8c" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" Jan 29 11:25:59 
crc kubenswrapper[4766]: I0129 11:25:59.437437 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"46db8397-242b-4386-be3d-d737aef0c878","Type":"ContainerStarted","Data":"8ee7c94816a8872a4422704007e6ad201ce57f56a8ab6a7b2cba4bfc3034834f"} Jan 29 11:25:59 crc kubenswrapper[4766]: E0129 11:25:59.458667 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-mpsxm" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" Jan 29 11:25:59 crc kubenswrapper[4766]: I0129 11:25:59.537429 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 11:25:59 crc kubenswrapper[4766]: I0129 11:25:59.698663 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d558b78b6-6psxn"] Jan 29 11:25:59 crc kubenswrapper[4766]: W0129 11:25:59.737358 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod688635bd_e6d5_43bc_a5b8_21f485a3621b.slice/crio-9ad29b0bcf980cb4f06e39a023eef478a1687b07dcf3cf5cab340009af0a2257 WatchSource:0}: Error finding container 9ad29b0bcf980cb4f06e39a023eef478a1687b07dcf3cf5cab340009af0a2257: Status 404 returned error can't find the container with id 9ad29b0bcf980cb4f06e39a023eef478a1687b07dcf3cf5cab340009af0a2257 Jan 29 11:25:59 crc kubenswrapper[4766]: I0129 11:25:59.752722 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k"] Jan 29 11:25:59 crc kubenswrapper[4766]: W0129 11:25:59.766243 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96faba9a_6377_4d35_8809_ec064f590a37.slice/crio-cec6fdd5d9650ef5362d79977023b4b3e64fc2232afa22911e89b2cc33bc7a51 WatchSource:0}: Error finding container cec6fdd5d9650ef5362d79977023b4b3e64fc2232afa22911e89b2cc33bc7a51: Status 404 returned error can't find the container with id cec6fdd5d9650ef5362d79977023b4b3e64fc2232afa22911e89b2cc33bc7a51 Jan 29 11:25:59 crc kubenswrapper[4766]: I0129 11:25:59.839388 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-bqx75 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 29 11:25:59 crc kubenswrapper[4766]: I0129 11:25:59.839454 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bqx75" podUID="3992a1ef-5774-468c-9640-cd23218862cc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 29 11:26:00 crc kubenswrapper[4766]: I0129 11:26:00.460081 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gnkm" event={"ID":"8a36521a-d4cf-4c8e-8dbe-61599b472068","Type":"ContainerStarted","Data":"7c6847a659cf8ddc25326f6f6250201535668cecbf34731d409726760a0c7c65"} Jan 29 11:26:00 crc kubenswrapper[4766]: I0129 11:26:00.465123 4766 generic.go:334] "Generic (PLEG): container finished" podID="6da41cd3-3d8e-498a-a988-b5d711bca9d1" 
containerID="c25cca66c21c39ecaa52b292272dac881487b8430a7cad3069f391a6cd977c5b" exitCode=0 Jan 29 11:26:00 crc kubenswrapper[4766]: I0129 11:26:00.465236 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nr9mw" event={"ID":"6da41cd3-3d8e-498a-a988-b5d711bca9d1","Type":"ContainerDied","Data":"c25cca66c21c39ecaa52b292272dac881487b8430a7cad3069f391a6cd977c5b"} Jan 29 11:26:00 crc kubenswrapper[4766]: I0129 11:26:00.471232 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"46db8397-242b-4386-be3d-d737aef0c878","Type":"ContainerStarted","Data":"4f7dbfdf146d509169dafdc3d25ad327eb23bd8cbf0c97ee50b94209b3943952"} Jan 29 11:26:00 crc kubenswrapper[4766]: I0129 11:26:00.474762 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d558b78b6-6psxn" event={"ID":"688635bd-e6d5-43bc-a5b8-21f485a3621b","Type":"ContainerStarted","Data":"a04645725781d379170cf66e541cb2f1ed3ff03900fbee06a34a72ccc6c9a575"} Jan 29 11:26:00 crc kubenswrapper[4766]: I0129 11:26:00.474820 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d558b78b6-6psxn" event={"ID":"688635bd-e6d5-43bc-a5b8-21f485a3621b","Type":"ContainerStarted","Data":"9ad29b0bcf980cb4f06e39a023eef478a1687b07dcf3cf5cab340009af0a2257"} Jan 29 11:26:00 crc kubenswrapper[4766]: I0129 11:26:00.475893 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d558b78b6-6psxn" Jan 29 11:26:00 crc kubenswrapper[4766]: I0129 11:26:00.482694 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" event={"ID":"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a","Type":"ContainerStarted","Data":"fad51bc095d53b0b4e38951d803ca7e9fd8430c262fc7df79bdb27e585373f6f"} Jan 29 11:26:00 crc kubenswrapper[4766]: I0129 11:26:00.486285 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d558b78b6-6psxn" Jan 29 11:26:00 crc kubenswrapper[4766]: I0129 11:26:00.490637 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bqx75" event={"ID":"3992a1ef-5774-468c-9640-cd23218862cc","Type":"ContainerStarted","Data":"2f52017b133e0cc35e0291c56b3028879328e1fd12c67eb230dab2769df38b88"} Jan 29 11:26:00 crc kubenswrapper[4766]: I0129 11:26:00.491624 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-bqx75" Jan 29 11:26:00 crc kubenswrapper[4766]: I0129 11:26:00.495608 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-bqx75 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 29 11:26:00 crc kubenswrapper[4766]: I0129 11:26:00.495676 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bqx75" podUID="3992a1ef-5774-468c-9640-cd23218862cc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 29 11:26:00 crc kubenswrapper[4766]: I0129 11:26:00.497293 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"4a84c5fe-7616-4823-9559-2f1a6dc0237e","Type":"ContainerStarted","Data":"dc41a64e8ad1254e4ab7bb6acc80cf8822fa97599c5db51b692cfaf4d8596ce6"} Jan 29 11:26:00 crc kubenswrapper[4766]: I0129 11:26:00.497322 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4a84c5fe-7616-4823-9559-2f1a6dc0237e","Type":"ContainerStarted","Data":"20d7916b0232c2f57bb5f8c3d043b8778741a0da07f4c3bcdcbcaabaad640907"} Jan 29 11:26:00 crc kubenswrapper[4766]: I0129 11:26:00.502549 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k" event={"ID":"96faba9a-6377-4d35-8809-ec064f590a37","Type":"ContainerStarted","Data":"a8202c636a4ca87ac0b15fb87f19db9c51edea56dbedad933ac28c96d05a23c7"} Jan 29 11:26:00 crc kubenswrapper[4766]: I0129 11:26:00.502577 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k" event={"ID":"96faba9a-6377-4d35-8809-ec064f590a37","Type":"ContainerStarted","Data":"cec6fdd5d9650ef5362d79977023b4b3e64fc2232afa22911e89b2cc33bc7a51"} Jan 29 11:26:00 crc kubenswrapper[4766]: I0129 11:26:00.503872 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k" Jan 29 11:26:00 crc kubenswrapper[4766]: I0129 11:26:00.511641 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d558b78b6-6psxn" podStartSLOduration=81.511623759 podStartE2EDuration="1m21.511623759s" podCreationTimestamp="2026-01-29 11:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:26:00.509110718 +0000 UTC m=+297.621503929" watchObservedRunningTime="2026-01-29 11:26:00.511623759 +0000 UTC m=+297.624016770" Jan 29 11:26:00 crc kubenswrapper[4766]: I0129 11:26:00.516346 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k" Jan 29 11:26:00 crc kubenswrapper[4766]: I0129 11:26:00.579926 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=81.579903945 podStartE2EDuration="1m21.579903945s" podCreationTimestamp="2026-01-29 11:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:26:00.555518017 +0000 UTC m=+297.667911058" watchObservedRunningTime="2026-01-29 11:26:00.579903945 +0000 UTC m=+297.692296956" Jan 29 11:26:00 crc kubenswrapper[4766]: I0129 11:26:00.595332 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k" podStartSLOduration=81.595310156 podStartE2EDuration="1m21.595310156s" podCreationTimestamp="2026-01-29 11:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:26:00.594934412 +0000 UTC m=+297.707327423" watchObservedRunningTime="2026-01-29 11:26:00.595310156 +0000 UTC m=+297.707703187" Jan 29 11:26:00 crc kubenswrapper[4766]: I0129 11:26:00.649099 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=77.649078723 podStartE2EDuration="1m17.649078723s" podCreationTimestamp="2026-01-29 11:24:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:26:00.647756525 +0000 UTC m=+297.760149536" watchObservedRunningTime="2026-01-29 11:26:00.649078723 +0000 UTC m=+297.761471734" Jan 29 11:26:01 crc kubenswrapper[4766]: I0129 11:26:01.518200 4766 generic.go:334] "Generic (PLEG): container finished" podID="8a36521a-d4cf-4c8e-8dbe-61599b472068" containerID="7c6847a659cf8ddc25326f6f6250201535668cecbf34731d409726760a0c7c65" exitCode=0 Jan 29 11:26:01 crc kubenswrapper[4766]: I0129 11:26:01.518299 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gnkm" event={"ID":"8a36521a-d4cf-4c8e-8dbe-61599b472068","Type":"ContainerDied","Data":"7c6847a659cf8ddc25326f6f6250201535668cecbf34731d409726760a0c7c65"} Jan 29 11:26:01 crc kubenswrapper[4766]: I0129 11:26:01.522572 4766 generic.go:334] "Generic (PLEG): container finished" podID="46db8397-242b-4386-be3d-d737aef0c878" containerID="4f7dbfdf146d509169dafdc3d25ad327eb23bd8cbf0c97ee50b94209b3943952" exitCode=0 Jan 29 11:26:01 crc kubenswrapper[4766]: I0129 11:26:01.523713 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"46db8397-242b-4386-be3d-d737aef0c878","Type":"ContainerDied","Data":"4f7dbfdf146d509169dafdc3d25ad327eb23bd8cbf0c97ee50b94209b3943952"} Jan 29 11:26:01 crc kubenswrapper[4766]: I0129 11:26:01.523769 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-bqx75 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 29 11:26:01 crc kubenswrapper[4766]: I0129 11:26:01.523868 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bqx75" podUID="3992a1ef-5774-468c-9640-cd23218862cc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 29 11:26:02 crc kubenswrapper[4766]: I0129 11:26:02.530062 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gnkm" event={"ID":"8a36521a-d4cf-4c8e-8dbe-61599b472068","Type":"ContainerStarted","Data":"ebd6058e4f4c04ae01f565703745d5c00713a10ea2c182e01278af2c2a57b87c"} Jan 29 11:26:02 crc kubenswrapper[4766]: I0129 11:26:02.534096 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-bqx75 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 29 11:26:02 crc kubenswrapper[4766]: I0129 11:26:02.534148 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bqx75" podUID="3992a1ef-5774-468c-9640-cd23218862cc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 29 11:26:02 crc kubenswrapper[4766]: I0129 11:26:02.534590 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nr9mw" 
event={"ID":"6da41cd3-3d8e-498a-a988-b5d711bca9d1","Type":"ContainerStarted","Data":"9eb1c5d4912a6176f3e7f1b1235819b5c894a7528a603b7bd0eb7c77c6afa7eb"} Jan 29 11:26:02 crc kubenswrapper[4766]: I0129 11:26:02.556571 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nr9mw" podStartSLOduration=5.561414307 podStartE2EDuration="2m0.556541099s" podCreationTimestamp="2026-01-29 11:24:02 +0000 UTC" firstStartedPulling="2026-01-29 11:24:06.356208825 +0000 UTC m=+183.468601836" lastFinishedPulling="2026-01-29 11:26:01.351335617 +0000 UTC m=+298.463728628" observedRunningTime="2026-01-29 11:26:02.554399871 +0000 UTC m=+299.666792882" watchObservedRunningTime="2026-01-29 11:26:02.556541099 +0000 UTC m=+299.668934130" Jan 29 11:26:02 crc kubenswrapper[4766]: I0129 11:26:02.825848 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:26:02 crc kubenswrapper[4766]: I0129 11:26:02.879218 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/46db8397-242b-4386-be3d-d737aef0c878-kubelet-dir\") pod \"46db8397-242b-4386-be3d-d737aef0c878\" (UID: \"46db8397-242b-4386-be3d-d737aef0c878\") " Jan 29 11:26:02 crc kubenswrapper[4766]: I0129 11:26:02.879514 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46db8397-242b-4386-be3d-d737aef0c878-kube-api-access\") pod \"46db8397-242b-4386-be3d-d737aef0c878\" (UID: \"46db8397-242b-4386-be3d-d737aef0c878\") " Jan 29 11:26:02 crc kubenswrapper[4766]: I0129 11:26:02.879739 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46db8397-242b-4386-be3d-d737aef0c878-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "46db8397-242b-4386-be3d-d737aef0c878" (UID: "46db8397-242b-4386-be3d-d737aef0c878"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:26:02 crc kubenswrapper[4766]: I0129 11:26:02.891834 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46db8397-242b-4386-be3d-d737aef0c878-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "46db8397-242b-4386-be3d-d737aef0c878" (UID: "46db8397-242b-4386-be3d-d737aef0c878"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:26:02 crc kubenswrapper[4766]: I0129 11:26:02.981866 4766 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/46db8397-242b-4386-be3d-d737aef0c878-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 11:26:02 crc kubenswrapper[4766]: I0129 11:26:02.981916 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46db8397-242b-4386-be3d-d737aef0c878-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 11:26:03 crc kubenswrapper[4766]: I0129 11:26:03.155541 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nr9mw" Jan 29 11:26:03 crc kubenswrapper[4766]: I0129 11:26:03.155698 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nr9mw" Jan 29 11:26:03 crc kubenswrapper[4766]: I0129 11:26:03.541143 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"46db8397-242b-4386-be3d-d737aef0c878","Type":"ContainerDied","Data":"8ee7c94816a8872a4422704007e6ad201ce57f56a8ab6a7b2cba4bfc3034834f"} Jan 29 11:26:03 crc kubenswrapper[4766]: I0129 11:26:03.541459 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ee7c94816a8872a4422704007e6ad201ce57f56a8ab6a7b2cba4bfc3034834f" Jan 29 11:26:03 crc kubenswrapper[4766]: I0129 11:26:03.541239 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:26:03 crc kubenswrapper[4766]: I0129 11:26:03.564189 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8gnkm" podStartSLOduration=4.644541458 podStartE2EDuration="2m0.564159777s" podCreationTimestamp="2026-01-29 11:24:03 +0000 UTC" firstStartedPulling="2026-01-29 11:24:06.346614602 +0000 UTC m=+183.459007613" lastFinishedPulling="2026-01-29 11:26:02.266232921 +0000 UTC m=+299.378625932" observedRunningTime="2026-01-29 11:26:03.558858004 +0000 UTC m=+300.671251015" watchObservedRunningTime="2026-01-29 11:26:03.564159777 +0000 UTC m=+300.676552818" Jan 29 11:26:04 crc kubenswrapper[4766]: I0129 11:26:04.149005 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8gnkm" Jan 29 11:26:04 crc kubenswrapper[4766]: I0129 11:26:04.149087 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8gnkm" Jan 29 11:26:04 crc kubenswrapper[4766]: I0129 11:26:04.472107 4766 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 29 11:26:04 crc kubenswrapper[4766]: I0129 11:26:04.528205 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-nr9mw" podUID="6da41cd3-3d8e-498a-a988-b5d711bca9d1" containerName="registry-server" probeResult="failure" output=< Jan 29 11:26:04 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s Jan 29 11:26:04 crc kubenswrapper[4766]: > Jan 29 11:26:05 crc kubenswrapper[4766]: I0129 11:26:05.187871 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8gnkm" podUID="8a36521a-d4cf-4c8e-8dbe-61599b472068" containerName="registry-server" 
probeResult="failure" output=< Jan 29 11:26:05 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s Jan 29 11:26:05 crc kubenswrapper[4766]: > Jan 29 11:26:09 crc kubenswrapper[4766]: I0129 11:26:09.862077 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-bqx75" Jan 29 11:26:13 crc kubenswrapper[4766]: I0129 11:26:13.447200 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nr9mw" Jan 29 11:26:13 crc kubenswrapper[4766]: I0129 11:26:13.520558 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nr9mw" Jan 29 11:26:14 crc kubenswrapper[4766]: I0129 11:26:14.198245 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8gnkm" Jan 29 11:26:14 crc kubenswrapper[4766]: I0129 11:26:14.237151 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8gnkm" Jan 29 11:26:14 crc kubenswrapper[4766]: I0129 11:26:14.655281 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nr9mw"] Jan 29 11:26:14 crc kubenswrapper[4766]: I0129 11:26:14.655594 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nr9mw" podUID="6da41cd3-3d8e-498a-a988-b5d711bca9d1" containerName="registry-server" containerID="cri-o://9eb1c5d4912a6176f3e7f1b1235819b5c894a7528a603b7bd0eb7c77c6afa7eb" gracePeriod=2 Jan 29 11:26:18 crc kubenswrapper[4766]: I0129 11:26:18.644982 4766 generic.go:334] "Generic (PLEG): container finished" podID="6da41cd3-3d8e-498a-a988-b5d711bca9d1" containerID="9eb1c5d4912a6176f3e7f1b1235819b5c894a7528a603b7bd0eb7c77c6afa7eb" exitCode=0 Jan 29 11:26:18 crc kubenswrapper[4766]: I0129 11:26:18.645089 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nr9mw" event={"ID":"6da41cd3-3d8e-498a-a988-b5d711bca9d1","Type":"ContainerDied","Data":"9eb1c5d4912a6176f3e7f1b1235819b5c894a7528a603b7bd0eb7c77c6afa7eb"} Jan 29 11:26:21 crc kubenswrapper[4766]: I0129 11:26:21.393949 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nr9mw" Jan 29 11:26:21 crc kubenswrapper[4766]: I0129 11:26:21.471995 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29kcr\" (UniqueName: \"kubernetes.io/projected/6da41cd3-3d8e-498a-a988-b5d711bca9d1-kube-api-access-29kcr\") pod \"6da41cd3-3d8e-498a-a988-b5d711bca9d1\" (UID: \"6da41cd3-3d8e-498a-a988-b5d711bca9d1\") " Jan 29 11:26:21 crc kubenswrapper[4766]: I0129 11:26:21.472141 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6da41cd3-3d8e-498a-a988-b5d711bca9d1-catalog-content\") pod \"6da41cd3-3d8e-498a-a988-b5d711bca9d1\" (UID: \"6da41cd3-3d8e-498a-a988-b5d711bca9d1\") " Jan 29 11:26:21 crc kubenswrapper[4766]: I0129 11:26:21.472211 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6da41cd3-3d8e-498a-a988-b5d711bca9d1-utilities\") pod \"6da41cd3-3d8e-498a-a988-b5d711bca9d1\" (UID: \"6da41cd3-3d8e-498a-a988-b5d711bca9d1\") " Jan 29 11:26:21 crc kubenswrapper[4766]: I0129 11:26:21.484773 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6da41cd3-3d8e-498a-a988-b5d711bca9d1-kube-api-access-29kcr" (OuterVolumeSpecName: "kube-api-access-29kcr") pod "6da41cd3-3d8e-498a-a988-b5d711bca9d1" (UID: "6da41cd3-3d8e-498a-a988-b5d711bca9d1"). InnerVolumeSpecName "kube-api-access-29kcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:26:21 crc kubenswrapper[4766]: I0129 11:26:21.573531 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29kcr\" (UniqueName: \"kubernetes.io/projected/6da41cd3-3d8e-498a-a988-b5d711bca9d1-kube-api-access-29kcr\") on node \"crc\" DevicePath \"\"" Jan 29 11:26:21 crc kubenswrapper[4766]: I0129 11:26:21.597978 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6da41cd3-3d8e-498a-a988-b5d711bca9d1-utilities" (OuterVolumeSpecName: "utilities") pod "6da41cd3-3d8e-498a-a988-b5d711bca9d1" (UID: "6da41cd3-3d8e-498a-a988-b5d711bca9d1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:26:21 crc kubenswrapper[4766]: I0129 11:26:21.646821 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6da41cd3-3d8e-498a-a988-b5d711bca9d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6da41cd3-3d8e-498a-a988-b5d711bca9d1" (UID: "6da41cd3-3d8e-498a-a988-b5d711bca9d1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:26:21 crc kubenswrapper[4766]: I0129 11:26:21.667159 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nr9mw" event={"ID":"6da41cd3-3d8e-498a-a988-b5d711bca9d1","Type":"ContainerDied","Data":"bf36cb9f8f0e88167cf9f737d79bbf34e11f5f58a6aca48ae208d9a4d89adf33"} Jan 29 11:26:21 crc kubenswrapper[4766]: I0129 11:26:21.667242 4766 scope.go:117] "RemoveContainer" containerID="9eb1c5d4912a6176f3e7f1b1235819b5c894a7528a603b7bd0eb7c77c6afa7eb" Jan 29 11:26:21 crc kubenswrapper[4766]: I0129 11:26:21.667241 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nr9mw" Jan 29 11:26:21 crc kubenswrapper[4766]: I0129 11:26:21.675432 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6da41cd3-3d8e-498a-a988-b5d711bca9d1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:26:21 crc kubenswrapper[4766]: I0129 11:26:21.675509 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6da41cd3-3d8e-498a-a988-b5d711bca9d1-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:26:21 crc kubenswrapper[4766]: I0129 11:26:21.688742 4766 scope.go:117] "RemoveContainer" containerID="c25cca66c21c39ecaa52b292272dac881487b8430a7cad3069f391a6cd977c5b" Jan 29 11:26:21 crc kubenswrapper[4766]: I0129 11:26:21.710037 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nr9mw"] Jan 29 11:26:21 crc kubenswrapper[4766]: I0129 11:26:21.716609 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nr9mw"] Jan 29 11:26:21 crc kubenswrapper[4766]: I0129 11:26:21.719027 4766 scope.go:117] "RemoveContainer" containerID="a78b5b662c97e86f238ac8fa2120ef215b98b41d2804578f2d47b35ab0e7265f" Jan 29 11:26:22 crc kubenswrapper[4766]: I0129 11:26:22.678739 4766 generic.go:334] "Generic (PLEG): container finished" podID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" containerID="3b45b97ee064487185914290de86cdeb80cde56edce4d24d25e4ec123d5c4723" exitCode=0 Jan 29 11:26:22 crc kubenswrapper[4766]: I0129 11:26:22.678844 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9bpkx" event={"ID":"d4adf06b-9f3e-42f1-b70f-31ec39923b11","Type":"ContainerDied","Data":"3b45b97ee064487185914290de86cdeb80cde56edce4d24d25e4ec123d5c4723"} Jan 29 11:26:23 crc kubenswrapper[4766]: I0129 11:26:23.242306 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6da41cd3-3d8e-498a-a988-b5d711bca9d1" path="/var/lib/kubelet/pods/6da41cd3-3d8e-498a-a988-b5d711bca9d1/volumes" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.665182 4766 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 11:26:37 crc kubenswrapper[4766]: E0129 11:26:37.665867 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46db8397-242b-4386-be3d-d737aef0c878" containerName="pruner" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.665882 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="46db8397-242b-4386-be3d-d737aef0c878" containerName="pruner" Jan 29 11:26:37 crc kubenswrapper[4766]: E0129 11:26:37.665892 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6da41cd3-3d8e-498a-a988-b5d711bca9d1" containerName="registry-server" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.665897 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6da41cd3-3d8e-498a-a988-b5d711bca9d1" containerName="registry-server" Jan 29 11:26:37 crc kubenswrapper[4766]: E0129 11:26:37.665914 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6da41cd3-3d8e-498a-a988-b5d711bca9d1" containerName="extract-utilities" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.665920 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6da41cd3-3d8e-498a-a988-b5d711bca9d1" containerName="extract-utilities" Jan 29 11:26:37 crc kubenswrapper[4766]: E0129 
11:26:37.665931 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6da41cd3-3d8e-498a-a988-b5d711bca9d1" containerName="extract-content" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.665937 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6da41cd3-3d8e-498a-a988-b5d711bca9d1" containerName="extract-content" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.666037 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="46db8397-242b-4386-be3d-d737aef0c878" containerName="pruner" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.666045 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="6da41cd3-3d8e-498a-a988-b5d711bca9d1" containerName="registry-server" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.666397 4766 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.666569 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.667043 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://c126f1878b27bb8648cebba2334b545a61682575e486c7752447760c630b71f8" gracePeriod=15 Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.667090 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://81d6b9ab2c5f75cb3a1a6580174135bdbe87b1e341de30ae151d2c7916fb6e85" gracePeriod=15 Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.667149 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://a3a4c1de706188e9d9c986cf611fcfa0afc2fa6d0d9e45908d9864fbd096fb7f" gracePeriod=15 Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.667198 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://1a1895436e31a3a277d7ef40231e37f768d143472a5d055ec3fa3908d59eb806" gracePeriod=15 Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.667266 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://964049484efc670285ee54e4f6081c1f719edaa8143966e9762028ad97d2518e" gracePeriod=15 Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.671378 4766 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 11:26:37 crc kubenswrapper[4766]: E0129 11:26:37.671724 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.671741 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 29 11:26:37 crc kubenswrapper[4766]: E0129 11:26:37.671759 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.671766 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 11:26:37 crc kubenswrapper[4766]: E0129 11:26:37.671776 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.671782 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 11:26:37 crc kubenswrapper[4766]: E0129 11:26:37.671796 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.671803 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 11:26:37 crc kubenswrapper[4766]: E0129 11:26:37.671816 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.671822 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 11:26:37 crc kubenswrapper[4766]: E0129 11:26:37.671831 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.671837 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 11:26:37 crc kubenswrapper[4766]: E0129 11:26:37.671846 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.671851 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.671945 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.671955 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.671964 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.671974 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.671982 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.671991 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.672000 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 11:26:37 crc kubenswrapper[4766]: E0129 11:26:37.672103 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.672110 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.711991 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.712054 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.712097 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.712130 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.712161 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.712183 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.712258 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.712319 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.803249 4766 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.803338 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.813257 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.813371 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.813397 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.813443 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.813464 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.813498 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") 
pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.813489 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.813539 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.813488 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.813572 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.813605 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.813607 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.813522 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.813745 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.813639 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:26:37 crc kubenswrapper[4766]: I0129 11:26:37.813947 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:26:38 crc kubenswrapper[4766]: I0129 11:26:38.782193 4766 generic.go:334] "Generic (PLEG): container finished" podID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" containerID="dc41a64e8ad1254e4ab7bb6acc80cf8822fa97599c5db51b692cfaf4d8596ce6" exitCode=0 Jan 29 11:26:38 crc kubenswrapper[4766]: I0129 11:26:38.782315 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4a84c5fe-7616-4823-9559-2f1a6dc0237e","Type":"ContainerDied","Data":"dc41a64e8ad1254e4ab7bb6acc80cf8822fa97599c5db51b692cfaf4d8596ce6"} Jan 29 11:26:38 crc kubenswrapper[4766]: I0129 11:26:38.784141 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:38 crc kubenswrapper[4766]: I0129 11:26:38.784698 4766 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:38 crc kubenswrapper[4766]: I0129 11:26:38.786817 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 11:26:38 crc kubenswrapper[4766]: I0129 11:26:38.789005 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 11:26:38 crc kubenswrapper[4766]: I0129 11:26:38.790284 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="964049484efc670285ee54e4f6081c1f719edaa8143966e9762028ad97d2518e" exitCode=0 Jan 29 11:26:38 crc kubenswrapper[4766]: I0129 11:26:38.790325 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1a1895436e31a3a277d7ef40231e37f768d143472a5d055ec3fa3908d59eb806" exitCode=2 Jan 29 11:26:39 crc kubenswrapper[4766]: I0129 11:26:39.806230 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 11:26:39 crc kubenswrapper[4766]: I0129 11:26:39.811496 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 11:26:39 crc kubenswrapper[4766]: I0129 11:26:39.813238 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="81d6b9ab2c5f75cb3a1a6580174135bdbe87b1e341de30ae151d2c7916fb6e85" exitCode=0 Jan 29 11:26:39 crc kubenswrapper[4766]: I0129 11:26:39.813291 4766 
generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a3a4c1de706188e9d9c986cf611fcfa0afc2fa6d0d9e45908d9864fbd096fb7f" exitCode=0 Jan 29 11:26:39 crc kubenswrapper[4766]: I0129 11:26:39.813627 4766 scope.go:117] "RemoveContainer" containerID="0f0252f8e9ab4d4ab528bd6b3a8042e649cc47fe6ac1eebdefbf4cd90cb8c231" Jan 29 11:26:41 crc kubenswrapper[4766]: I0129 11:26:41.827938 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 11:26:41 crc kubenswrapper[4766]: I0129 11:26:41.828903 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c126f1878b27bb8648cebba2334b545a61682575e486c7752447760c630b71f8" exitCode=0 Jan 29 11:26:42 crc kubenswrapper[4766]: E0129 11:26:42.692608 4766 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:26:42 crc kubenswrapper[4766]: I0129 11:26:42.693143 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:26:45 crc kubenswrapper[4766]: E0129 11:26:45.225847 4766 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:45 crc kubenswrapper[4766]: E0129 11:26:45.226597 4766 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:45 crc kubenswrapper[4766]: E0129 11:26:45.226812 4766 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:45 crc kubenswrapper[4766]: E0129 11:26:45.226973 4766 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:45 crc kubenswrapper[4766]: E0129 11:26:45.227197 4766 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:45 crc kubenswrapper[4766]: I0129 11:26:45.227215 4766 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 29 11:26:45 crc kubenswrapper[4766]: E0129 11:26:45.227465 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="200ms" Jan 29 11:26:45 crc kubenswrapper[4766]: I0129 11:26:45.227531 4766 status_manager.go:851] "Failed to get status for pod" 
podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:45 crc kubenswrapper[4766]: E0129 11:26:45.428439 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="400ms" Jan 29 11:26:45 crc kubenswrapper[4766]: E0129 11:26:45.851862 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="800ms" Jan 29 11:26:46 crc kubenswrapper[4766]: E0129 11:26:46.653407 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="1.6s" Jan 29 11:26:47 crc kubenswrapper[4766]: E0129 11:26:47.237500 4766 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" volumeName="registry-storage" Jan 29 11:26:48 crc kubenswrapper[4766]: E0129 11:26:48.254573 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="3.2s" Jan 29 11:26:50 crc kubenswrapper[4766]: I0129 11:26:50.252314 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:26:50 crc kubenswrapper[4766]: I0129 11:26:50.253438 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:50 crc kubenswrapper[4766]: I0129 11:26:50.317476 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4a84c5fe-7616-4823-9559-2f1a6dc0237e-kubelet-dir\") pod \"4a84c5fe-7616-4823-9559-2f1a6dc0237e\" (UID: \"4a84c5fe-7616-4823-9559-2f1a6dc0237e\") " Jan 29 11:26:50 crc kubenswrapper[4766]: I0129 11:26:50.317684 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a84c5fe-7616-4823-9559-2f1a6dc0237e-kube-api-access\") pod \"4a84c5fe-7616-4823-9559-2f1a6dc0237e\" (UID: \"4a84c5fe-7616-4823-9559-2f1a6dc0237e\") " Jan 29 11:26:50 crc kubenswrapper[4766]: I0129 11:26:50.317674 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a84c5fe-7616-4823-9559-2f1a6dc0237e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4a84c5fe-7616-4823-9559-2f1a6dc0237e" (UID: "4a84c5fe-7616-4823-9559-2f1a6dc0237e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:26:50 crc kubenswrapper[4766]: I0129 11:26:50.317730 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4a84c5fe-7616-4823-9559-2f1a6dc0237e-var-lock\") pod \"4a84c5fe-7616-4823-9559-2f1a6dc0237e\" (UID: \"4a84c5fe-7616-4823-9559-2f1a6dc0237e\") " Jan 29 11:26:50 crc kubenswrapper[4766]: I0129 11:26:50.317919 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a84c5fe-7616-4823-9559-2f1a6dc0237e-var-lock" (OuterVolumeSpecName: "var-lock") pod "4a84c5fe-7616-4823-9559-2f1a6dc0237e" (UID: "4a84c5fe-7616-4823-9559-2f1a6dc0237e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:26:50 crc kubenswrapper[4766]: I0129 11:26:50.318115 4766 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4a84c5fe-7616-4823-9559-2f1a6dc0237e-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 11:26:50 crc kubenswrapper[4766]: I0129 11:26:50.318132 4766 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4a84c5fe-7616-4823-9559-2f1a6dc0237e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 11:26:50 crc kubenswrapper[4766]: I0129 11:26:50.322955 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a84c5fe-7616-4823-9559-2f1a6dc0237e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4a84c5fe-7616-4823-9559-2f1a6dc0237e" (UID: "4a84c5fe-7616-4823-9559-2f1a6dc0237e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:26:50 crc kubenswrapper[4766]: I0129 11:26:50.419916 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a84c5fe-7616-4823-9559-2f1a6dc0237e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 11:26:50 crc kubenswrapper[4766]: I0129 11:26:50.888713 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4a84c5fe-7616-4823-9559-2f1a6dc0237e","Type":"ContainerDied","Data":"20d7916b0232c2f57bb5f8c3d043b8778741a0da07f4c3bcdcbcaabaad640907"} Jan 29 11:26:50 crc kubenswrapper[4766]: I0129 11:26:50.888976 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20d7916b0232c2f57bb5f8c3d043b8778741a0da07f4c3bcdcbcaabaad640907" Jan 29 11:26:50 crc kubenswrapper[4766]: I0129 11:26:50.888790 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:26:50 crc kubenswrapper[4766]: I0129 11:26:50.905392 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:51 crc kubenswrapper[4766]: I0129 11:26:51.315824 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 11:26:51 crc kubenswrapper[4766]: I0129 11:26:51.316778 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:26:51 crc kubenswrapper[4766]: I0129 11:26:51.317393 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:51 crc kubenswrapper[4766]: I0129 11:26:51.317760 4766 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:51 crc kubenswrapper[4766]: I0129 11:26:51.434252 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 11:26:51 crc kubenswrapper[4766]: I0129 11:26:51.434456 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:26:51 crc kubenswrapper[4766]: I0129 11:26:51.434598 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 11:26:51 crc kubenswrapper[4766]: I0129 11:26:51.434715 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:26:51 crc kubenswrapper[4766]: I0129 11:26:51.434723 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 11:26:51 crc kubenswrapper[4766]: I0129 11:26:51.434797 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:26:51 crc kubenswrapper[4766]: I0129 11:26:51.435218 4766 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 29 11:26:51 crc kubenswrapper[4766]: I0129 11:26:51.435248 4766 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 11:26:51 crc kubenswrapper[4766]: I0129 11:26:51.435262 4766 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 11:26:51 crc kubenswrapper[4766]: E0129 11:26:51.455610 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="6.4s" Jan 29 11:26:51 crc kubenswrapper[4766]: E0129 11:26:51.764021 4766 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.194:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-plg8c.188f300d2f21eb67 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-plg8c,UID:8a615f4a-f498-4abb-be15-10f224ff84df,APIVersion:v1,ResourceVersion:28675,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\" in 31.448s (31.448s including waiting). 
Image size: 1180692192 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 11:26:51.763354471 +0000 UTC m=+348.875747512,LastTimestamp:2026-01-29 11:26:51.763354471 +0000 UTC m=+348.875747512,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 11:26:51 crc kubenswrapper[4766]: I0129 11:26:51.900901 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 11:26:51 crc kubenswrapper[4766]: I0129 11:26:51.902283 4766 scope.go:117] "RemoveContainer" containerID="81d6b9ab2c5f75cb3a1a6580174135bdbe87b1e341de30ae151d2c7916fb6e85" Jan 29 11:26:51 crc kubenswrapper[4766]: I0129 11:26:51.902496 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:26:51 crc kubenswrapper[4766]: I0129 11:26:51.903873 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:51 crc kubenswrapper[4766]: I0129 11:26:51.904454 4766 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:51 crc kubenswrapper[4766]: I0129 11:26:51.918605 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:51 crc kubenswrapper[4766]: I0129 11:26:51.919492 4766 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.187923 4766 scope.go:117] "RemoveContainer" containerID="964049484efc670285ee54e4f6081c1f719edaa8143966e9762028ad97d2518e" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.226174 4766 scope.go:117] "RemoveContainer" containerID="a3a4c1de706188e9d9c986cf611fcfa0afc2fa6d0d9e45908d9864fbd096fb7f" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.366290 4766 scope.go:117] "RemoveContainer" containerID="1a1895436e31a3a277d7ef40231e37f768d143472a5d055ec3fa3908d59eb806" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.405836 4766 scope.go:117] "RemoveContainer" containerID="c126f1878b27bb8648cebba2334b545a61682575e486c7752447760c630b71f8" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.435975 4766 scope.go:117] "RemoveContainer" containerID="31478a3b6e039686da936ce74edf4d5d7481ee549a80dadbbd57524699b85eca" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.917534 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-mpsxm" event={"ID":"aa1d4f87-07d9-4499-a955-15f90a40a4ad","Type":"ContainerStarted","Data":"a1c190d20916e90ee7b9ffeb8fa2dcf165c901660ce1f3981322e259dce88f91"} Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.920083 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.920340 4766 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.921914 4766 status_manager.go:851] "Failed to get status for pod" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" pod="openshift-marketplace/redhat-operators-mpsxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mpsxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.924911 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"287cf47a22cd5eef2c815eb5b70b79d441f052346440ed5726f5df4276a789c1"} Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.924975 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"54d246ba46780341b4188161c37b9a84112d116fed77c0b657a2f8df57ef5df2"} Jan 29 11:26:52 crc kubenswrapper[4766]: E0129 11:26:52.927698 4766 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.927889 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.928911 4766 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.929283 4766 status_manager.go:851] "Failed to get status for pod" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" pod="openshift-marketplace/redhat-operators-mpsxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mpsxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc 
kubenswrapper[4766]: I0129 11:26:52.931773 4766 generic.go:334] "Generic (PLEG): container finished" podID="ad6c1b2d-116e-4979-9676-c27cb40ee318" containerID="02a80e550839da309f6e873fefe8dbd102823b74d77d406e03964a6e6c84911e" exitCode=0 Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.931858 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd99b" event={"ID":"ad6c1b2d-116e-4979-9676-c27cb40ee318","Type":"ContainerDied","Data":"02a80e550839da309f6e873fefe8dbd102823b74d77d406e03964a6e6c84911e"} Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.933101 4766 status_manager.go:851] "Failed to get status for pod" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" pod="openshift-marketplace/community-operators-bd99b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bd99b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.933807 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.934270 4766 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.934561 4766 status_manager.go:851] "Failed to get status for pod" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" pod="openshift-marketplace/redhat-operators-mpsxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mpsxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.941626 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.941706 4766 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="ec6eeec32db3cd97e718206000b41183351e1698186a661547746982cef1518a" exitCode=1 Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.941812 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"ec6eeec32db3cd97e718206000b41183351e1698186a661547746982cef1518a"} Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.942559 4766 scope.go:117] "RemoveContainer" containerID="ec6eeec32db3cd97e718206000b41183351e1698186a661547746982cef1518a" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.945064 4766 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" 
Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.945697 4766 status_manager.go:851] "Failed to get status for pod" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" pod="openshift-marketplace/community-operators-bd99b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bd99b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.946150 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.946486 4766 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.946856 4766 status_manager.go:851] "Failed to get status for pod" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" pod="openshift-marketplace/redhat-operators-mpsxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mpsxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.947611 4766 generic.go:334] "Generic (PLEG): container finished" podID="8a615f4a-f498-4abb-be15-10f224ff84df" containerID="10ce103542d68cdff3ae408e7daf504046172cf50410cd7d3b206abb459276ea" exitCode=0 Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.947704 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plg8c" event={"ID":"8a615f4a-f498-4abb-be15-10f224ff84df","Type":"ContainerDied","Data":"10ce103542d68cdff3ae408e7daf504046172cf50410cd7d3b206abb459276ea"} Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.948458 4766 status_manager.go:851] "Failed to get status for pod" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" pod="openshift-marketplace/redhat-operators-mpsxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mpsxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.948733 4766 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.949123 4766 status_manager.go:851] "Failed to get status for pod" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" pod="openshift-marketplace/community-operators-bd99b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bd99b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.949369 4766 status_manager.go:851] "Failed to get status for pod" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" 
pod="openshift-marketplace/redhat-marketplace-plg8c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-plg8c\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.949605 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.949819 4766 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.956343 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9bpkx" event={"ID":"d4adf06b-9f3e-42f1-b70f-31ec39923b11","Type":"ContainerStarted","Data":"9128a9bb705d8143f3e3b108dd9b69778f90d66fccfea3699ac54c69b6a3bd5c"} Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.957858 4766 status_manager.go:851] "Failed to get status for pod" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" pod="openshift-marketplace/redhat-operators-mpsxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mpsxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.958796 4766 status_manager.go:851] "Failed to get status for pod" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" pod="openshift-marketplace/community-operators-bd99b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bd99b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.959334 4766 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.963518 4766 status_manager.go:851] "Failed to get status for pod" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" pod="openshift-marketplace/redhat-marketplace-plg8c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-plg8c\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.963767 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.963993 4766 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.964365 4766 status_manager.go:851] "Failed to get status for pod" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" pod="openshift-marketplace/certified-operators-9bpkx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9bpkx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.969058 4766 generic.go:334] "Generic (PLEG): container finished" podID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" containerID="038f1419e5983fb3b980bd0ccfa90f74b513f612ba1990f8f629f58637ee9b7d" exitCode=0 Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.969167 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tx9nf" event={"ID":"43d854e2-61c5-46d0-a85f-575c5fc51fa4","Type":"ContainerDied","Data":"038f1419e5983fb3b980bd0ccfa90f74b513f612ba1990f8f629f58637ee9b7d"} Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.971818 4766 status_manager.go:851] "Failed to get status for pod" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" pod="openshift-marketplace/certified-operators-9bpkx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9bpkx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.972151 4766 status_manager.go:851] "Failed to get status for pod" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" pod="openshift-marketplace/redhat-operators-mpsxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mpsxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.972388 4766 status_manager.go:851] "Failed to get status for pod" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" pod="openshift-marketplace/community-operators-tx9nf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tx9nf\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.972636 4766 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.972866 4766 status_manager.go:851] "Failed to get status for pod" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" pod="openshift-marketplace/community-operators-bd99b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bd99b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.976551 4766 status_manager.go:851] "Failed to get status for pod" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" pod="openshift-marketplace/redhat-marketplace-plg8c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-plg8c\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 
11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.977203 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.977488 4766 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.981991 4766 generic.go:334] "Generic (PLEG): container finished" podID="74f9c23f-66e4-4082-b80f-f4966819b6d7" containerID="e3441512cb68c0342c4b1d74a268765c9b4c7a8ca3ca96e71a43944cc73f834f" exitCode=0 Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.983839 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6mp9b" event={"ID":"74f9c23f-66e4-4082-b80f-f4966819b6d7","Type":"ContainerDied","Data":"e3441512cb68c0342c4b1d74a268765c9b4c7a8ca3ca96e71a43944cc73f834f"} Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.985516 4766 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.986534 4766 status_manager.go:851] "Failed to get status for pod" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" pod="openshift-marketplace/certified-operators-9bpkx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9bpkx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.989058 4766 status_manager.go:851] "Failed to get status for pod" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" pod="openshift-marketplace/redhat-operators-mpsxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mpsxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.990682 4766 status_manager.go:851] "Failed to get status for pod" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" pod="openshift-marketplace/certified-operators-6mp9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6mp9b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.990980 4766 status_manager.go:851] "Failed to get status for pod" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" pod="openshift-marketplace/community-operators-tx9nf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tx9nf\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.991167 4766 status_manager.go:851] "Failed to get status for pod" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" pod="openshift-marketplace/community-operators-bd99b" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bd99b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.991355 4766 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.991546 4766 status_manager.go:851] "Failed to get status for pod" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" pod="openshift-marketplace/redhat-marketplace-plg8c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-plg8c\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:52 crc kubenswrapper[4766]: I0129 11:26:52.991716 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:53 crc kubenswrapper[4766]: I0129 11:26:53.233274 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 29 11:26:53 crc kubenswrapper[4766]: E0129 11:26:53.527161 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:26:53Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:26:53Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:26:53Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:26:53Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:15db2d5dee506f58d0ee5bf1684107211c0473c43ef6111e13df0c55850f77c9\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:acd62b9cbbc1168a7c81182ba747850ea67c24294a6703fb341471191da484f8\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1676237031},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:06acdd148ddfe14125d9ab253b9eb0dca1930047787f5b277
a21bc88cdfd5030\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a649014abb6de45bd5e9eba64d76cf536ed766c876c58c0e1388115bafecf763\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1185399018},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:420326d8488ceff2cde22ad8b85d739b0c254d47e703f7ddb1f08f77a48816a6\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:54817da328fa589491a3acbe80acdd88c0830dcc63aaafc08c3539925a1a3b03\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1180692192},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d6
7af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBy
tes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:53 crc kubenswrapper[4766]: E0129 11:26:53.528066 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:53 crc kubenswrapper[4766]: E0129 11:26:53.528482 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:53 crc kubenswrapper[4766]: E0129 11:26:53.528646 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:53 crc kubenswrapper[4766]: E0129 11:26:53.528786 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:53 crc kubenswrapper[4766]: E0129 11:26:53.528799 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 11:26:53 crc kubenswrapper[4766]: I0129 11:26:53.991102 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tx9nf" event={"ID":"43d854e2-61c5-46d0-a85f-575c5fc51fa4","Type":"ContainerStarted","Data":"c49a5657f4047d3b4ebc585eeb00c9ca7a83e764b486c9e6912d17a4a490c00a"} Jan 29 11:26:53 crc kubenswrapper[4766]: I0129 11:26:53.993067 4766 status_manager.go:851] "Failed to get status for pod" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" pod="openshift-marketplace/certified-operators-9bpkx" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9bpkx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:53 crc kubenswrapper[4766]: I0129 11:26:53.993678 4766 status_manager.go:851] "Failed to get status for pod" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" pod="openshift-marketplace/redhat-operators-mpsxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mpsxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:53 crc kubenswrapper[4766]: I0129 11:26:53.994026 4766 status_manager.go:851] "Failed to get status for pod" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" pod="openshift-marketplace/certified-operators-6mp9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6mp9b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:53 crc kubenswrapper[4766]: I0129 11:26:53.994324 4766 status_manager.go:851] "Failed to get status for pod" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" pod="openshift-marketplace/community-operators-tx9nf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tx9nf\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:53 crc kubenswrapper[4766]: I0129 11:26:53.994706 4766 status_manager.go:851] "Failed to get status for pod" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" pod="openshift-marketplace/community-operators-bd99b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bd99b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:53 crc kubenswrapper[4766]: I0129 11:26:53.995015 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6mp9b" event={"ID":"74f9c23f-66e4-4082-b80f-f4966819b6d7","Type":"ContainerStarted","Data":"3f56d440b7fb273f0534dd3ff1b25ac7c059dedeac9b33c24125192fcfc1ed0f"} Jan 29 11:26:53 crc kubenswrapper[4766]: I0129 11:26:53.995780 4766 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:53 crc kubenswrapper[4766]: I0129 11:26:53.996072 4766 status_manager.go:851] "Failed to get status for pod" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" pod="openshift-marketplace/redhat-marketplace-plg8c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-plg8c\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:53 crc kubenswrapper[4766]: I0129 11:26:53.996346 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:53 crc kubenswrapper[4766]: I0129 11:26:53.996753 4766 status_manager.go:851] "Failed to get status for pod" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" pod="openshift-marketplace/certified-operators-9bpkx" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9bpkx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:53 crc kubenswrapper[4766]: I0129 11:26:53.997012 4766 status_manager.go:851] "Failed to get status for pod" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" pod="openshift-marketplace/redhat-operators-mpsxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mpsxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:53 crc kubenswrapper[4766]: I0129 11:26:53.997336 4766 status_manager.go:851] "Failed to get status for pod" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" pod="openshift-marketplace/certified-operators-6mp9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6mp9b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:53 crc kubenswrapper[4766]: I0129 11:26:53.997817 4766 status_manager.go:851] "Failed to get status for pod" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" pod="openshift-marketplace/community-operators-tx9nf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tx9nf\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:53 crc kubenswrapper[4766]: I0129 11:26:53.998821 4766 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:53 crc kubenswrapper[4766]: I0129 11:26:53.999104 4766 status_manager.go:851] "Failed to get status for pod" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" pod="openshift-marketplace/community-operators-bd99b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bd99b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:53 crc kubenswrapper[4766]: I0129 11:26:53.999218 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd99b" event={"ID":"ad6c1b2d-116e-4979-9676-c27cb40ee318","Type":"ContainerStarted","Data":"82707fe664c151d0458276e934b21a4258aee603312c376e36730df4a67eadaa"} Jan 29 11:26:53 crc kubenswrapper[4766]: I0129 11:26:53.999474 4766 status_manager.go:851] "Failed to get status for pod" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" pod="openshift-marketplace/redhat-marketplace-plg8c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-plg8c\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:53.999997 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.000533 4766 status_manager.go:851] "Failed to get status for pod" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" pod="openshift-marketplace/certified-operators-6mp9b" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6mp9b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.000872 4766 status_manager.go:851] "Failed to get status for pod" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" pod="openshift-marketplace/community-operators-tx9nf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tx9nf\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.001202 4766 status_manager.go:851] "Failed to get status for pod" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" pod="openshift-marketplace/community-operators-bd99b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bd99b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.001538 4766 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.001798 4766 status_manager.go:851] "Failed to get status for pod" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" pod="openshift-marketplace/redhat-marketplace-plg8c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-plg8c\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.002158 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.002520 4766 status_manager.go:851] "Failed to get status for pod" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" pod="openshift-marketplace/certified-operators-9bpkx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9bpkx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.002752 4766 status_manager.go:851] "Failed to get status for pod" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" pod="openshift-marketplace/redhat-operators-mpsxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mpsxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.003990 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.004123 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f5823ee69d46049f12e2b9b8a10c8ff51d0699bfe8779333492c01ed8fc0a3c0"} Jan 29 
11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.004981 4766 status_manager.go:851] "Failed to get status for pod" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" pod="openshift-marketplace/certified-operators-6mp9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6mp9b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.005206 4766 status_manager.go:851] "Failed to get status for pod" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" pod="openshift-marketplace/community-operators-tx9nf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tx9nf\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.005480 4766 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.005839 4766 status_manager.go:851] "Failed to get status for pod" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" pod="openshift-marketplace/community-operators-bd99b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bd99b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.006125 4766 status_manager.go:851] "Failed to get status for pod" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" pod="openshift-marketplace/redhat-marketplace-plg8c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-plg8c\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.006482 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.006695 4766 status_manager.go:851] "Failed to get status for pod" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" pod="openshift-marketplace/certified-operators-9bpkx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9bpkx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.006912 4766 status_manager.go:851] "Failed to get status for pod" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" pod="openshift-marketplace/redhat-operators-mpsxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mpsxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.007585 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plg8c" event={"ID":"8a615f4a-f498-4abb-be15-10f224ff84df","Type":"ContainerStarted","Data":"f13e58f3874e0f03a18028a1f078d889b05c172817f3173a7e8156921e66571a"} Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 
11:26:54.008981 4766 status_manager.go:851] "Failed to get status for pod" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" pod="openshift-marketplace/redhat-marketplace-plg8c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-plg8c\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.009193 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.009471 4766 status_manager.go:851] "Failed to get status for pod" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" pod="openshift-marketplace/certified-operators-9bpkx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9bpkx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.009669 4766 status_manager.go:851] "Failed to get status for pod" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" pod="openshift-marketplace/redhat-operators-mpsxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mpsxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.009875 4766 status_manager.go:851] "Failed to get status for pod" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" pod="openshift-marketplace/certified-operators-6mp9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6mp9b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.010091 4766 status_manager.go:851] "Failed to get status for pod" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" pod="openshift-marketplace/community-operators-tx9nf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tx9nf\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.010451 4766 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.010883 4766 status_manager.go:851] "Failed to get status for pod" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" pod="openshift-marketplace/community-operators-bd99b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bd99b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.011397 4766 generic.go:334] "Generic (PLEG): container finished" podID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" containerID="a1c190d20916e90ee7b9ffeb8fa2dcf165c901660ce1f3981322e259dce88f91" exitCode=0 Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.011466 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-mpsxm" event={"ID":"aa1d4f87-07d9-4499-a955-15f90a40a4ad","Type":"ContainerDied","Data":"a1c190d20916e90ee7b9ffeb8fa2dcf165c901660ce1f3981322e259dce88f91"} Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.012203 4766 status_manager.go:851] "Failed to get status for pod" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" pod="openshift-marketplace/community-operators-tx9nf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tx9nf\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.012430 4766 status_manager.go:851] "Failed to get status for pod" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" pod="openshift-marketplace/community-operators-bd99b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bd99b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.016627 4766 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.017324 4766 status_manager.go:851] "Failed to get status for pod" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" pod="openshift-marketplace/redhat-marketplace-plg8c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-plg8c\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.018106 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.018571 4766 status_manager.go:851] "Failed to get status for pod" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" pod="openshift-marketplace/certified-operators-9bpkx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9bpkx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.018785 4766 status_manager.go:851] "Failed to get status for pod" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" pod="openshift-marketplace/redhat-operators-mpsxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mpsxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:54 crc kubenswrapper[4766]: I0129 11:26:54.020019 4766 status_manager.go:851] "Failed to get status for pod" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" pod="openshift-marketplace/certified-operators-6mp9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6mp9b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:55 crc kubenswrapper[4766]: I0129 11:26:55.019510 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpsxm" 
event={"ID":"aa1d4f87-07d9-4499-a955-15f90a40a4ad","Type":"ContainerStarted","Data":"9bcaf801ed9bb1e11c43ea0c0f9fb31f52fe070c28cee6842c7f93488f044243"} Jan 29 11:26:55 crc kubenswrapper[4766]: I0129 11:26:55.020897 4766 status_manager.go:851] "Failed to get status for pod" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" pod="openshift-marketplace/certified-operators-6mp9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6mp9b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:55 crc kubenswrapper[4766]: I0129 11:26:55.021683 4766 status_manager.go:851] "Failed to get status for pod" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" pod="openshift-marketplace/community-operators-tx9nf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tx9nf\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:55 crc kubenswrapper[4766]: I0129 11:26:55.021990 4766 status_manager.go:851] "Failed to get status for pod" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" pod="openshift-marketplace/community-operators-bd99b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bd99b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:55 crc kubenswrapper[4766]: I0129 11:26:55.022261 4766 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:55 crc kubenswrapper[4766]: I0129 11:26:55.022589 4766 status_manager.go:851] "Failed to get status for pod" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" pod="openshift-marketplace/redhat-marketplace-plg8c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-plg8c\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:55 crc kubenswrapper[4766]: I0129 11:26:55.022883 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:55 crc kubenswrapper[4766]: I0129 11:26:55.023110 4766 status_manager.go:851] "Failed to get status for pod" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" pod="openshift-marketplace/certified-operators-9bpkx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9bpkx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:55 crc kubenswrapper[4766]: I0129 11:26:55.023352 4766 status_manager.go:851] "Failed to get status for pod" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" pod="openshift-marketplace/redhat-operators-mpsxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mpsxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:55 crc kubenswrapper[4766]: I0129 11:26:55.228158 4766 status_manager.go:851] "Failed to get status for pod" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" 
pod="openshift-marketplace/certified-operators-9bpkx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9bpkx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:55 crc kubenswrapper[4766]: I0129 11:26:55.229322 4766 status_manager.go:851] "Failed to get status for pod" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" pod="openshift-marketplace/redhat-operators-mpsxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mpsxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:55 crc kubenswrapper[4766]: I0129 11:26:55.229948 4766 status_manager.go:851] "Failed to get status for pod" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" pod="openshift-marketplace/certified-operators-6mp9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6mp9b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:55 crc kubenswrapper[4766]: I0129 11:26:55.230282 4766 status_manager.go:851] "Failed to get status for pod" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" pod="openshift-marketplace/community-operators-tx9nf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tx9nf\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:55 crc kubenswrapper[4766]: I0129 11:26:55.231614 4766 status_manager.go:851] "Failed to get status for pod" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" pod="openshift-marketplace/community-operators-bd99b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bd99b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:55 crc kubenswrapper[4766]: I0129 11:26:55.232660 4766 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:55 crc kubenswrapper[4766]: I0129 11:26:55.233042 4766 status_manager.go:851] "Failed to get status for pod" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" pod="openshift-marketplace/redhat-marketplace-plg8c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-plg8c\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:55 crc kubenswrapper[4766]: I0129 11:26:55.233680 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:26:56 crc kubenswrapper[4766]: I0129 11:26:56.792345 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 11:26:57 crc kubenswrapper[4766]: E0129 11:26:57.857367 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="7s" Jan 29 
11:26:58 crc kubenswrapper[4766]: I0129 11:26:58.619794 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 11:26:58 crc kubenswrapper[4766]: I0129 11:26:58.619948 4766 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 29 11:26:58 crc kubenswrapper[4766]: I0129 11:26:58.620021 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 29 11:26:59 crc kubenswrapper[4766]: E0129 11:26:59.911139 4766 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.194:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-plg8c.188f300d2f21eb67 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-plg8c,UID:8a615f4a-f498-4abb-be15-10f224ff84df,APIVersion:v1,ResourceVersion:28675,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\" in 31.448s (31.448s including waiting). 
Image size: 1180692192 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 11:26:51.763354471 +0000 UTC m=+348.875747512,LastTimestamp:2026-01-29 11:26:51.763354471 +0000 UTC m=+348.875747512,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 11:27:00 crc kubenswrapper[4766]: I0129 11:27:00.759952 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9bpkx" Jan 29 11:27:00 crc kubenswrapper[4766]: I0129 11:27:00.760018 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9bpkx" Jan 29 11:27:00 crc kubenswrapper[4766]: I0129 11:27:00.809497 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9bpkx" Jan 29 11:27:00 crc kubenswrapper[4766]: I0129 11:27:00.811739 4766 status_manager.go:851] "Failed to get status for pod" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" pod="openshift-marketplace/certified-operators-6mp9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6mp9b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:00 crc kubenswrapper[4766]: I0129 11:27:00.812562 4766 status_manager.go:851] "Failed to get status for pod" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" pod="openshift-marketplace/community-operators-tx9nf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tx9nf\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:00 crc kubenswrapper[4766]: I0129 11:27:00.813151 4766 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:00 crc kubenswrapper[4766]: I0129 11:27:00.813476 4766 status_manager.go:851] "Failed to get status for pod" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" pod="openshift-marketplace/community-operators-bd99b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bd99b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:00 crc kubenswrapper[4766]: I0129 11:27:00.814048 4766 status_manager.go:851] "Failed to get status for pod" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" pod="openshift-marketplace/redhat-marketplace-plg8c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-plg8c\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:00 crc kubenswrapper[4766]: I0129 11:27:00.814269 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:00 crc kubenswrapper[4766]: I0129 11:27:00.814870 4766 status_manager.go:851] "Failed to get status for pod" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" pod="openshift-marketplace/certified-operators-9bpkx" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9bpkx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:00 crc kubenswrapper[4766]: I0129 11:27:00.815832 4766 status_manager.go:851] "Failed to get status for pod" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" pod="openshift-marketplace/redhat-operators-mpsxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mpsxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.012763 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tx9nf" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.012971 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tx9nf" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.054221 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tx9nf" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.054998 4766 status_manager.go:851] "Failed to get status for pod" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" pod="openshift-marketplace/certified-operators-6mp9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6mp9b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.055370 4766 status_manager.go:851] "Failed to get status for pod" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" pod="openshift-marketplace/community-operators-tx9nf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tx9nf\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.055795 4766 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.056808 4766 status_manager.go:851] "Failed to get status for pod" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" pod="openshift-marketplace/community-operators-bd99b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bd99b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.057873 4766 status_manager.go:851] "Failed to get status for pod" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" pod="openshift-marketplace/redhat-marketplace-plg8c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-plg8c\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.058258 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc 
kubenswrapper[4766]: I0129 11:27:01.058798 4766 status_manager.go:851] "Failed to get status for pod" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" pod="openshift-marketplace/certified-operators-9bpkx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9bpkx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.059106 4766 status_manager.go:851] "Failed to get status for pod" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" pod="openshift-marketplace/redhat-operators-mpsxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mpsxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.099719 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9bpkx" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.100388 4766 status_manager.go:851] "Failed to get status for pod" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" pod="openshift-marketplace/certified-operators-9bpkx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9bpkx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.100923 4766 status_manager.go:851] "Failed to get status for pod" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" pod="openshift-marketplace/redhat-operators-mpsxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mpsxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.101123 4766 status_manager.go:851] "Failed to get status for pod" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" pod="openshift-marketplace/certified-operators-6mp9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6mp9b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.101279 4766 status_manager.go:851] "Failed to get status for pod" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" pod="openshift-marketplace/community-operators-tx9nf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tx9nf\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.101470 4766 status_manager.go:851] "Failed to get status for pod" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" pod="openshift-marketplace/community-operators-bd99b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bd99b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.102033 4766 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.103039 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-tx9nf" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.103086 4766 status_manager.go:851] "Failed to get status for pod" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" pod="openshift-marketplace/redhat-marketplace-plg8c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-plg8c\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.103326 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.103989 4766 status_manager.go:851] "Failed to get status for pod" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" pod="openshift-marketplace/certified-operators-9bpkx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9bpkx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.104861 4766 status_manager.go:851] "Failed to get status for pod" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" pod="openshift-marketplace/redhat-operators-mpsxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mpsxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.105082 4766 status_manager.go:851] "Failed to get status for pod" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" pod="openshift-marketplace/certified-operators-6mp9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6mp9b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.105292 4766 status_manager.go:851] "Failed to get status for pod" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" pod="openshift-marketplace/community-operators-tx9nf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tx9nf\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.105553 4766 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.105725 4766 status_manager.go:851] "Failed to get status for pod" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" pod="openshift-marketplace/community-operators-bd99b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bd99b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.105887 4766 status_manager.go:851] "Failed to get status for pod" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" pod="openshift-marketplace/redhat-marketplace-plg8c" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-plg8c\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.106039 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.220831 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6mp9b" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.220958 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6mp9b" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.263813 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6mp9b" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.264581 4766 status_manager.go:851] "Failed to get status for pod" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" pod="openshift-marketplace/certified-operators-6mp9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6mp9b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.265011 4766 status_manager.go:851] "Failed to get status for pod" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" pod="openshift-marketplace/community-operators-tx9nf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tx9nf\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.267220 4766 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.268122 4766 status_manager.go:851] "Failed to get status for pod" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" pod="openshift-marketplace/community-operators-bd99b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bd99b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.268753 4766 status_manager.go:851] "Failed to get status for pod" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" pod="openshift-marketplace/redhat-marketplace-plg8c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-plg8c\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.269085 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc 
kubenswrapper[4766]: I0129 11:27:01.269440 4766 status_manager.go:851] "Failed to get status for pod" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" pod="openshift-marketplace/certified-operators-9bpkx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9bpkx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.269816 4766 status_manager.go:851] "Failed to get status for pod" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" pod="openshift-marketplace/redhat-operators-mpsxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mpsxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.400509 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bd99b" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.400578 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bd99b" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.444266 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bd99b" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.445143 4766 status_manager.go:851] "Failed to get status for pod" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" pod="openshift-marketplace/certified-operators-9bpkx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9bpkx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.445695 4766 status_manager.go:851] "Failed to get status for pod" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" pod="openshift-marketplace/redhat-operators-mpsxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mpsxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.445996 4766 status_manager.go:851] "Failed to get status for pod" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" pod="openshift-marketplace/certified-operators-6mp9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6mp9b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.446332 4766 status_manager.go:851] "Failed to get status for pod" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" pod="openshift-marketplace/community-operators-tx9nf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tx9nf\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.446749 4766 status_manager.go:851] "Failed to get status for pod" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" pod="openshift-marketplace/community-operators-bd99b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bd99b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.447032 4766 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.447308 4766 status_manager.go:851] "Failed to get status for pod" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" pod="openshift-marketplace/redhat-marketplace-plg8c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-plg8c\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:01 crc kubenswrapper[4766]: I0129 11:27:01.447680 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.109463 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bd99b" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.110224 4766 status_manager.go:851] "Failed to get status for pod" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" pod="openshift-marketplace/redhat-marketplace-plg8c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-plg8c\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.110664 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.111168 4766 status_manager.go:851] "Failed to get status for pod" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" pod="openshift-marketplace/certified-operators-9bpkx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9bpkx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.111395 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6mp9b" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.111464 4766 status_manager.go:851] "Failed to get status for pod" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" pod="openshift-marketplace/redhat-operators-mpsxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mpsxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.111861 4766 status_manager.go:851] "Failed to get status for pod" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" pod="openshift-marketplace/certified-operators-6mp9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6mp9b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.112384 4766 status_manager.go:851] "Failed to get status for pod" 
podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" pod="openshift-marketplace/community-operators-tx9nf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tx9nf\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.113185 4766 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.113502 4766 status_manager.go:851] "Failed to get status for pod" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" pod="openshift-marketplace/community-operators-bd99b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bd99b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.113864 4766 status_manager.go:851] "Failed to get status for pod" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" pod="openshift-marketplace/certified-operators-6mp9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6mp9b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.114133 4766 status_manager.go:851] "Failed to get status for pod" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" pod="openshift-marketplace/community-operators-tx9nf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tx9nf\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.114645 4766 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.114910 4766 status_manager.go:851] "Failed to get status for pod" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" pod="openshift-marketplace/community-operators-bd99b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bd99b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.115210 4766 status_manager.go:851] "Failed to get status for pod" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" pod="openshift-marketplace/redhat-marketplace-plg8c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-plg8c\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.115672 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.116080 
4766 status_manager.go:851] "Failed to get status for pod" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" pod="openshift-marketplace/certified-operators-9bpkx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9bpkx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.116582 4766 status_manager.go:851] "Failed to get status for pod" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" pod="openshift-marketplace/redhat-operators-mpsxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mpsxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.224592 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.225875 4766 status_manager.go:851] "Failed to get status for pod" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" pod="openshift-marketplace/certified-operators-9bpkx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9bpkx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.226102 4766 status_manager.go:851] "Failed to get status for pod" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" pod="openshift-marketplace/redhat-operators-mpsxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mpsxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.226760 4766 status_manager.go:851] "Failed to get status for pod" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" pod="openshift-marketplace/certified-operators-6mp9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6mp9b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.227576 4766 status_manager.go:851] "Failed to get status for pod" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" pod="openshift-marketplace/community-operators-tx9nf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tx9nf\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.228354 4766 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.229227 4766 status_manager.go:851] "Failed to get status for pod" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" pod="openshift-marketplace/community-operators-bd99b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bd99b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.230953 4766 status_manager.go:851] "Failed to get status for pod" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" pod="openshift-marketplace/redhat-marketplace-plg8c" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-plg8c\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.231501 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.248884 4766 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e5dc50cb-2d41-45cd-8a3d-615212a20120" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.248927 4766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e5dc50cb-2d41-45cd-8a3d-615212a20120" Jan 29 11:27:02 crc kubenswrapper[4766]: E0129 11:27:02.249583 4766 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.250169 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:27:02 crc kubenswrapper[4766]: W0129 11:27:02.276608 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-37d003d4a9411cb2c746e7b1cbd42e4e1f803d240003ca204c18d1580dc9aef3 WatchSource:0}: Error finding container 37d003d4a9411cb2c746e7b1cbd42e4e1f803d240003ca204c18d1580dc9aef3: Status 404 returned error can't find the container with id 37d003d4a9411cb2c746e7b1cbd42e4e1f803d240003ca204c18d1580dc9aef3 Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.749591 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-plg8c" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.750048 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-plg8c" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.792114 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-plg8c" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.792873 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.793447 4766 status_manager.go:851] "Failed to get status for pod" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" pod="openshift-marketplace/redhat-marketplace-plg8c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-plg8c\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.793956 4766 status_manager.go:851] "Failed to get status for pod" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" 
pod="openshift-marketplace/certified-operators-9bpkx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9bpkx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.794196 4766 status_manager.go:851] "Failed to get status for pod" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" pod="openshift-marketplace/redhat-operators-mpsxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mpsxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.794634 4766 status_manager.go:851] "Failed to get status for pod" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" pod="openshift-marketplace/certified-operators-6mp9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6mp9b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.795262 4766 status_manager.go:851] "Failed to get status for pod" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" pod="openshift-marketplace/community-operators-tx9nf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tx9nf\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.795652 4766 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:02 crc kubenswrapper[4766]: I0129 11:27:02.795970 4766 status_manager.go:851] "Failed to get status for pod" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" pod="openshift-marketplace/community-operators-bd99b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bd99b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:03 crc kubenswrapper[4766]: I0129 11:27:03.067256 4766 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="99b04231e9c50fdaa627644e82c4ffd2b77b1928174226ad84cdfa4d4c9754c2" exitCode=0 Jan 29 11:27:03 crc kubenswrapper[4766]: I0129 11:27:03.067438 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"99b04231e9c50fdaa627644e82c4ffd2b77b1928174226ad84cdfa4d4c9754c2"} Jan 29 11:27:03 crc kubenswrapper[4766]: I0129 11:27:03.067508 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"37d003d4a9411cb2c746e7b1cbd42e4e1f803d240003ca204c18d1580dc9aef3"} Jan 29 11:27:03 crc kubenswrapper[4766]: I0129 11:27:03.068158 4766 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e5dc50cb-2d41-45cd-8a3d-615212a20120" Jan 29 11:27:03 crc kubenswrapper[4766]: I0129 11:27:03.068186 4766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e5dc50cb-2d41-45cd-8a3d-615212a20120" Jan 29 11:27:03 crc 
kubenswrapper[4766]: E0129 11:27:03.068786 4766 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:27:03 crc kubenswrapper[4766]: I0129 11:27:03.068797 4766 status_manager.go:851] "Failed to get status for pod" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" pod="openshift-marketplace/certified-operators-6mp9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6mp9b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:03 crc kubenswrapper[4766]: I0129 11:27:03.069341 4766 status_manager.go:851] "Failed to get status for pod" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" pod="openshift-marketplace/community-operators-tx9nf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tx9nf\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:03 crc kubenswrapper[4766]: I0129 11:27:03.069602 4766 status_manager.go:851] "Failed to get status for pod" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" pod="openshift-marketplace/community-operators-bd99b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bd99b\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:03 crc kubenswrapper[4766]: I0129 11:27:03.069822 4766 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:03 crc kubenswrapper[4766]: I0129 11:27:03.070019 4766 status_manager.go:851] "Failed to get status for pod" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" pod="openshift-marketplace/redhat-marketplace-plg8c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-plg8c\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:03 crc kubenswrapper[4766]: I0129 11:27:03.071208 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:03 crc kubenswrapper[4766]: I0129 11:27:03.071500 4766 status_manager.go:851] "Failed to get status for pod" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" pod="openshift-marketplace/certified-operators-9bpkx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9bpkx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:03 crc kubenswrapper[4766]: I0129 11:27:03.071780 4766 status_manager.go:851] "Failed to get status for pod" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" pod="openshift-marketplace/redhat-operators-mpsxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mpsxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 29 11:27:03 crc kubenswrapper[4766]: 
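kube-apiserver-crc is a static pod, so the kubelet mirrors it into the API server; the kubelet.go:1909 / mirror_client.go entries above show it repeatedly trying to DELETE the stale mirror pod (podUID e5dc50cb-2d41-45cd-8a3d-615212a20120) and failing while the API server it is deleting through is itself still down. A sketch of that delete call against the REST path from the log, assuming placeholder credentials rather than the kubelet's real client machinery:

```go
// mirrordelete.go - sketch: deleting a mirror pod object via the path in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	// Path copied from the "Failed deleting a mirror pod" entry. The placeholder
	// bearer token and InsecureSkipVerify stand in for the kubelet's credentials.
	url := "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc"
	req, _ := http.NewRequest(http.MethodDelete, url, nil)
	req.Header.Set("Authorization", "Bearer <token>")
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("delete failed:", err) // "connection refused" until the new apiserver is up
		return
	}
	defer resp.Body.Close()
	fmt.Println("mirror pod delete:", resp.Status)
}
```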
Jan 29 11:27:03 crc kubenswrapper[4766]: I0129 11:27:03.112391 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-plg8c"
Jan 29 11:27:03 crc kubenswrapper[4766]: I0129 11:27:03.113262 4766 status_manager.go:851] "Failed to get status for pod" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" pod="openshift-marketplace/certified-operators-6mp9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6mp9b\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 29 11:27:03 crc kubenswrapper[4766]: I0129 11:27:03.113698 4766 status_manager.go:851] "Failed to get status for pod" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" pod="openshift-marketplace/community-operators-tx9nf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tx9nf\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 29 11:27:03 crc kubenswrapper[4766]: I0129 11:27:03.114060 4766 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 29 11:27:03 crc kubenswrapper[4766]: I0129 11:27:03.114444 4766 status_manager.go:851] "Failed to get status for pod" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" pod="openshift-marketplace/community-operators-bd99b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bd99b\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 29 11:27:03 crc kubenswrapper[4766]: I0129 11:27:03.114899 4766 status_manager.go:851] "Failed to get status for pod" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 29 11:27:03 crc kubenswrapper[4766]: I0129 11:27:03.115224 4766 status_manager.go:851] "Failed to get status for pod" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" pod="openshift-marketplace/redhat-marketplace-plg8c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-plg8c\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 29 11:27:03 crc kubenswrapper[4766]: I0129 11:27:03.115460 4766 status_manager.go:851] "Failed to get status for pod" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" pod="openshift-marketplace/certified-operators-9bpkx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9bpkx\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 29 11:27:03 crc kubenswrapper[4766]: I0129 11:27:03.115721 4766 status_manager.go:851] "Failed to get status for pod" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" pod="openshift-marketplace/redhat-operators-mpsxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mpsxm\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 29 11:27:04 crc kubenswrapper[4766]: I0129 11:27:04.075726 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"71caa2a473084c662e859f331a1dd56f327070e679f80eb745b762efff016f83"}
Jan 29 11:27:04 crc kubenswrapper[4766]: I0129 11:27:04.076442 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e318a8336f91d793421c7a4f187bca5bc60830085e883972c4e2070d950fefa6"}
Jan 29 11:27:04 crc kubenswrapper[4766]: I0129 11:27:04.595764 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mpsxm"
Jan 29 11:27:04 crc kubenswrapper[4766]: I0129 11:27:04.595840 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mpsxm"
Jan 29 11:27:04 crc kubenswrapper[4766]: I0129 11:27:04.651427 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mpsxm"
Jan 29 11:27:05 crc kubenswrapper[4766]: I0129 11:27:05.087830 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b647a8349a6801f84cfa215b501c623d8450eaa237945ebbe9e933ec5d5c17eb"}
Jan 29 11:27:05 crc kubenswrapper[4766]: I0129 11:27:05.147226 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mpsxm"
Jan 29 11:27:06 crc kubenswrapper[4766]: I0129 11:27:06.099965 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f6d0067134e0de37dbfd2fe9ef754bee5d7e68b4172747c0654bfb1f530cc078"}
Jan 29 11:27:08 crc kubenswrapper[4766]: I0129 11:27:08.116115 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"95191aa0330b0d33eb7274e9bba1d090f1e271c6a337618b11ae21dfa92e3433"}
Jan 29 11:27:08 crc kubenswrapper[4766]: I0129 11:27:08.620547 4766 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 29 11:27:08 crc kubenswrapper[4766]: I0129 11:27:08.620632 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 29 11:27:09 crc kubenswrapper[4766]: I0129 11:27:09.128037 4766 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e5dc50cb-2d41-45cd-8a3d-615212a20120"
Jan 29 11:27:09 crc kubenswrapper[4766]: I0129 11:27:09.128744 4766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e5dc50cb-2d41-45cd-8a3d-615212a20120"
Jan 29 11:27:09 crc kubenswrapper[4766]: I0129 11:27:09.128049 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 11:27:09 crc kubenswrapper[4766]: I0129 11:27:09.144952 4766 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 11:27:09 crc kubenswrapper[4766]: I0129 11:27:09.918022 4766 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5207c3a2-29f4-485e-bedf-1e2432ee0ffd"
Jan 29 11:27:10 crc kubenswrapper[4766]: I0129 11:27:10.132457 4766 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e5dc50cb-2d41-45cd-8a3d-615212a20120"
Jan 29 11:27:10 crc kubenswrapper[4766]: I0129 11:27:10.133457 4766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e5dc50cb-2d41-45cd-8a3d-615212a20120"
Jan 29 11:27:10 crc kubenswrapper[4766]: I0129 11:27:10.134476 4766 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5207c3a2-29f4-485e-bedf-1e2432ee0ffd"
Jan 29 11:27:18 crc kubenswrapper[4766]: I0129 11:27:18.620648 4766 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 29 11:27:18 crc kubenswrapper[4766]: I0129 11:27:18.621247 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 29 11:27:18 crc kubenswrapper[4766]: I0129 11:27:18.621313 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 11:27:18 crc kubenswrapper[4766]: I0129 11:27:18.621993 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"f5823ee69d46049f12e2b9b8a10c8ff51d0699bfe8779333492c01ed8fc0a3c0"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Jan 29 11:27:18 crc kubenswrapper[4766]: I0129 11:27:18.622084 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://f5823ee69d46049f12e2b9b8a10c8ff51d0699bfe8779333492c01ed8fc0a3c0" gracePeriod=30
Jan 29 11:27:39 crc kubenswrapper[4766]: I0129 11:27:39.746536 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 29 11:27:39 crc kubenswrapper[4766]: I0129 11:27:39.830283 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 29 11:27:40 crc kubenswrapper[4766]: I0129 11:27:40.068220 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
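The kuberuntime entries above kill kube-controller-manager with gracePeriod=30 after repeated startup-probe failures (visible at 11:26:58, 11:27:08, and 11:27:18); the container later exits with code 137, recorded by generic.go:334 at 11:27:49 below. An exit code of 137 is 128+9, i.e. the process was finished off by SIGKILL, consistent with the 30-second grace period lapsing. A quick sketch of that decoding (generic POSIX exit-status arithmetic, not kubelet code):

```go
// exitcode.go - decode a container exit code into its terminating signal.
package main

import (
	"fmt"
	"syscall"
)

func main() {
	const exitCode = 137 // from the generic.go:334 entry below
	if exitCode > 128 {
		// Shells and runtimes report "killed by signal N" as exit code 128+N.
		sig := syscall.Signal(exitCode - 128)
		fmt.Printf("exit %d => killed by signal %d (%s)\n", exitCode, int(sig), sig)
	}
}
```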
Jan 29 11:27:40 crc kubenswrapper[4766]: I0129 11:27:40.990188 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 29 11:27:41 crc kubenswrapper[4766]: I0129 11:27:41.666151 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 29 11:27:41 crc kubenswrapper[4766]: I0129 11:27:41.863360 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 29 11:27:42 crc kubenswrapper[4766]: I0129 11:27:42.024005 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 29 11:27:42 crc kubenswrapper[4766]: I0129 11:27:42.163234 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 29 11:27:42 crc kubenswrapper[4766]: I0129 11:27:42.252972 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 29 11:27:42 crc kubenswrapper[4766]: I0129 11:27:42.511030 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 29 11:27:42 crc kubenswrapper[4766]: I0129 11:27:42.579913 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 29 11:27:42 crc kubenswrapper[4766]: I0129 11:27:42.742333 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 29 11:27:42 crc kubenswrapper[4766]: I0129 11:27:42.891386 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 29 11:27:43 crc kubenswrapper[4766]: I0129 11:27:43.246487 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 29 11:27:43 crc kubenswrapper[4766]: I0129 11:27:43.271052 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 29 11:27:43 crc kubenswrapper[4766]: I0129 11:27:43.285496 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 29 11:27:43 crc kubenswrapper[4766]: I0129 11:27:43.361626 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 29 11:27:43 crc kubenswrapper[4766]: I0129 11:27:43.525479 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 29 11:27:43 crc kubenswrapper[4766]: I0129 11:27:43.686263 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 29 11:27:43 crc kubenswrapper[4766]: I0129 11:27:43.956493 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 29 11:27:44 crc kubenswrapper[4766]: I0129 11:27:44.007562 4766 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 29 11:27:44 crc kubenswrapper[4766]: I0129 11:27:44.152307 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 29 11:27:44 crc kubenswrapper[4766]: I0129 11:27:44.213790 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 29 11:27:44 crc kubenswrapper[4766]: I0129 11:27:44.295187 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 29 11:27:44 crc kubenswrapper[4766]: I0129 11:27:44.384751 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 29 11:27:44 crc kubenswrapper[4766]: I0129 11:27:44.505949 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 29 11:27:44 crc kubenswrapper[4766]: I0129 11:27:44.614018 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 29 11:27:44 crc kubenswrapper[4766]: I0129 11:27:44.867850 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 29 11:27:44 crc kubenswrapper[4766]: I0129 11:27:44.931958 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 29 11:27:45 crc kubenswrapper[4766]: I0129 11:27:45.133986 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 29 11:27:45 crc kubenswrapper[4766]: I0129 11:27:45.426223 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 29 11:27:45 crc kubenswrapper[4766]: I0129 11:27:45.501433 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 29 11:27:45 crc kubenswrapper[4766]: I0129 11:27:45.686657 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 29 11:27:45 crc kubenswrapper[4766]: I0129 11:27:45.773909 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 29 11:27:45 crc kubenswrapper[4766]: I0129 11:27:45.811335 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 29 11:27:45 crc kubenswrapper[4766]: I0129 11:27:45.946790 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 29 11:27:45 crc kubenswrapper[4766]: I0129 11:27:45.966184 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 29 11:27:46 crc kubenswrapper[4766]: I0129 11:27:46.127932 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 29 11:27:46 crc kubenswrapper[4766]: I0129 11:27:46.191915 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 29 11:27:46 crc kubenswrapper[4766]: I0129 11:27:46.208091 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 29 11:27:46 crc kubenswrapper[4766]: I0129 11:27:46.343190 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 29 11:27:46 crc kubenswrapper[4766]: I0129 11:27:46.344251 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 29 11:27:46 crc kubenswrapper[4766]: I0129 11:27:46.385180 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 29 11:27:46 crc kubenswrapper[4766]: I0129 11:27:46.490580 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 29 11:27:46 crc kubenswrapper[4766]: I0129 11:27:46.736268 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 29 11:27:46 crc kubenswrapper[4766]: I0129 11:27:46.954325 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 29 11:27:47 crc kubenswrapper[4766]: I0129 11:27:47.087580 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 29 11:27:47 crc kubenswrapper[4766]: I0129 11:27:47.255175 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 29 11:27:47 crc kubenswrapper[4766]: I0129 11:27:47.372976 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 29 11:27:47 crc kubenswrapper[4766]: I0129 11:27:47.374452 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 29 11:27:47 crc kubenswrapper[4766]: I0129 11:27:47.470302 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 29 11:27:47 crc kubenswrapper[4766]: I0129 11:27:47.483781 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 29 11:27:47 crc kubenswrapper[4766]: I0129 11:27:47.511959 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 29 11:27:47 crc kubenswrapper[4766]: I0129 11:27:47.537176 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 29 11:27:47 crc kubenswrapper[4766]: I0129 11:27:47.549903 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 29 11:27:47 crc kubenswrapper[4766]: I0129 11:27:47.596653 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 29 11:27:47 crc kubenswrapper[4766]: I0129 11:27:47.616007 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 29 11:27:47 crc kubenswrapper[4766]: I0129 11:27:47.846765 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 29 11:27:47 crc kubenswrapper[4766]: I0129 11:27:47.896340 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 29 11:27:48 crc kubenswrapper[4766]: I0129 11:27:48.014435 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 11:27:48 crc kubenswrapper[4766]: I0129 11:27:48.019919 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 29 11:27:48 crc kubenswrapper[4766]: I0129 11:27:48.142020 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 29 11:27:48 crc kubenswrapper[4766]: I0129 11:27:48.158917 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 11:27:48 crc kubenswrapper[4766]: I0129 11:27:48.200700 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 29 11:27:48 crc kubenswrapper[4766]: I0129 11:27:48.539307 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 29 11:27:48 crc kubenswrapper[4766]: I0129 11:27:48.727439 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 29 11:27:49 crc kubenswrapper[4766]: I0129 11:27:49.129218 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 29 11:27:49 crc kubenswrapper[4766]: I0129 11:27:49.132103 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 11:27:49 crc kubenswrapper[4766]: I0129 11:27:49.205076 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 11:27:49 crc kubenswrapper[4766]: I0129 11:27:49.235242 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 11:27:49 crc kubenswrapper[4766]: I0129 11:27:49.271872 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 11:27:49 crc kubenswrapper[4766]: I0129 11:27:49.328647 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 29 11:27:49 crc kubenswrapper[4766]: I0129 11:27:49.372664 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 29 11:27:49 crc kubenswrapper[4766]: I0129 11:27:49.376110 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 11:27:49 crc kubenswrapper[4766]: I0129 11:27:49.376156 4766 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="f5823ee69d46049f12e2b9b8a10c8ff51d0699bfe8779333492c01ed8fc0a3c0" exitCode=137 Jan 29 11:27:49 crc kubenswrapper[4766]: I0129 11:27:49.376191 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"f5823ee69d46049f12e2b9b8a10c8ff51d0699bfe8779333492c01ed8fc0a3c0"} Jan 29 11:27:49 crc kubenswrapper[4766]: I0129 11:27:49.376225 4766 scope.go:117] "RemoveContainer" containerID="ec6eeec32db3cd97e718206000b41183351e1698186a661547746982cef1518a" Jan 29 11:27:49 crc kubenswrapper[4766]: I0129 11:27:49.403260 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 11:27:49 crc kubenswrapper[4766]: I0129 11:27:49.451746 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 11:27:49 crc kubenswrapper[4766]: I0129 11:27:49.457063 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 29 11:27:49 crc kubenswrapper[4766]: I0129 11:27:49.641002 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 29 11:27:49 crc kubenswrapper[4766]: I0129 11:27:49.663871 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 11:27:49 crc kubenswrapper[4766]: I0129 11:27:49.728289 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 11:27:49 crc kubenswrapper[4766]: I0129 11:27:49.762039 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 11:27:50 crc kubenswrapper[4766]: I0129 11:27:50.078913 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 11:27:50 crc kubenswrapper[4766]: I0129 11:27:50.167561 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 29 11:27:50 crc kubenswrapper[4766]: I0129 11:27:50.218460 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 29 11:27:50 crc kubenswrapper[4766]: I0129 11:27:50.347668 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 29 11:27:50 crc kubenswrapper[4766]: I0129 11:27:50.384235 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 29 11:27:50 crc kubenswrapper[4766]: I0129 11:27:50.385638 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3faa3f69154a3830f941a0c57a53335e95004892827ea6326cdb5e89a756b67f"} Jan 29 11:27:50 crc kubenswrapper[4766]: I0129 11:27:50.516946 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 11:27:50 crc kubenswrapper[4766]: I0129 11:27:50.716488 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 11:27:50 crc kubenswrapper[4766]: I0129 11:27:50.755778 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 11:27:50 crc 
kubenswrapper[4766]: I0129 11:27:50.874287 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 11:27:50 crc kubenswrapper[4766]: I0129 11:27:50.877747 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 11:27:51 crc kubenswrapper[4766]: I0129 11:27:51.022566 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 11:27:51 crc kubenswrapper[4766]: I0129 11:27:51.050371 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 11:27:51 crc kubenswrapper[4766]: I0129 11:27:51.076269 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 29 11:27:51 crc kubenswrapper[4766]: I0129 11:27:51.405344 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 11:27:51 crc kubenswrapper[4766]: I0129 11:27:51.503630 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 11:27:51 crc kubenswrapper[4766]: I0129 11:27:51.533992 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 29 11:27:51 crc kubenswrapper[4766]: I0129 11:27:51.611092 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 11:27:51 crc kubenswrapper[4766]: I0129 11:27:51.935299 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 29 11:27:52 crc kubenswrapper[4766]: I0129 11:27:52.033524 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 11:27:52 crc kubenswrapper[4766]: I0129 11:27:52.099804 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 11:27:52 crc kubenswrapper[4766]: I0129 11:27:52.260862 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 11:27:52 crc kubenswrapper[4766]: I0129 11:27:52.599922 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 29 11:27:52 crc kubenswrapper[4766]: I0129 11:27:52.624504 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 11:27:52 crc kubenswrapper[4766]: I0129 11:27:52.697558 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 11:27:52 crc kubenswrapper[4766]: I0129 11:27:52.746747 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 29 11:27:52 crc kubenswrapper[4766]: I0129 11:27:52.841083 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 29 11:27:52 crc kubenswrapper[4766]: I0129 11:27:52.908919 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 29 11:27:52 crc 
kubenswrapper[4766]: I0129 11:27:52.909627 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 29 11:27:52 crc kubenswrapper[4766]: I0129 11:27:52.910105 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 29 11:27:52 crc kubenswrapper[4766]: I0129 11:27:52.964599 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 29 11:27:53 crc kubenswrapper[4766]: I0129 11:27:53.065372 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 11:27:53 crc kubenswrapper[4766]: I0129 11:27:53.124762 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 29 11:27:53 crc kubenswrapper[4766]: I0129 11:27:53.210656 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 11:27:53 crc kubenswrapper[4766]: I0129 11:27:53.455523 4766 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 29 11:27:53 crc kubenswrapper[4766]: I0129 11:27:53.533707 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 29 11:27:53 crc kubenswrapper[4766]: I0129 11:27:53.666078 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 11:27:53 crc kubenswrapper[4766]: I0129 11:27:53.720808 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 11:27:53 crc kubenswrapper[4766]: I0129 11:27:53.904742 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 11:27:54 crc kubenswrapper[4766]: I0129 11:27:54.064570 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 11:27:54 crc kubenswrapper[4766]: I0129 11:27:54.092897 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 11:27:54 crc kubenswrapper[4766]: I0129 11:27:54.139767 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 11:27:54 crc kubenswrapper[4766]: I0129 11:27:54.315579 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 29 11:27:54 crc kubenswrapper[4766]: I0129 11:27:54.330128 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 11:27:54 crc kubenswrapper[4766]: I0129 11:27:54.368513 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 29 11:27:54 crc kubenswrapper[4766]: I0129 11:27:54.551493 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 29 11:27:54 crc kubenswrapper[4766]: I0129 11:27:54.552392 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 29 11:27:54 crc kubenswrapper[4766]: I0129 11:27:54.666905 4766 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 29 11:27:54 crc kubenswrapper[4766]: I0129 11:27:54.696847 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 29 11:27:54 crc kubenswrapper[4766]: I0129 11:27:54.981866 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 29 11:27:55 crc kubenswrapper[4766]: I0129 11:27:55.009823 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 29 11:27:55 crc kubenswrapper[4766]: I0129 11:27:55.049886 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 29 11:27:55 crc kubenswrapper[4766]: I0129 11:27:55.049930 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 29 11:27:55 crc kubenswrapper[4766]: I0129 11:27:55.087952 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 11:27:55 crc kubenswrapper[4766]: I0129 11:27:55.142719 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 29 11:27:55 crc kubenswrapper[4766]: I0129 11:27:55.612495 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 11:27:55 crc kubenswrapper[4766]: I0129 11:27:55.672403 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 11:27:55 crc kubenswrapper[4766]: I0129 11:27:55.680352 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 29 11:27:55 crc kubenswrapper[4766]: I0129 11:27:55.723285 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 29 11:27:55 crc kubenswrapper[4766]: I0129 11:27:55.854385 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 29 11:27:55 crc kubenswrapper[4766]: I0129 11:27:55.901486 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 11:27:56 crc kubenswrapper[4766]: I0129 11:27:56.647686 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 29 11:27:56 crc kubenswrapper[4766]: I0129 11:27:56.678448 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 29 11:27:56 crc kubenswrapper[4766]: I0129 11:27:56.688508 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 11:27:56 crc kubenswrapper[4766]: I0129 11:27:56.698206 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 29 11:27:56 crc kubenswrapper[4766]: I0129 11:27:56.746127 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 29 11:27:56 crc kubenswrapper[4766]: I0129 11:27:56.792783 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 11:27:56 crc kubenswrapper[4766]: I0129 11:27:56.886134 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 29 11:27:56 crc kubenswrapper[4766]: I0129 11:27:56.899999 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 29 11:27:56 crc kubenswrapper[4766]: I0129 11:27:56.994053 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 11:27:57 crc kubenswrapper[4766]: I0129 11:27:57.058092 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 11:27:57 crc kubenswrapper[4766]: I0129 11:27:57.291613 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 11:27:57 crc kubenswrapper[4766]: I0129 11:27:57.306724 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 11:27:57 crc kubenswrapper[4766]: I0129 11:27:57.397824 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 11:27:57 crc kubenswrapper[4766]: I0129 11:27:57.398257 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 11:27:57 crc kubenswrapper[4766]: I0129 11:27:57.463558 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 29 11:27:57 crc kubenswrapper[4766]: I0129 11:27:57.599646 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 29 11:27:57 crc kubenswrapper[4766]: I0129 11:27:57.687702 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 11:27:57 crc kubenswrapper[4766]: I0129 11:27:57.916328 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 29 11:27:58 crc kubenswrapper[4766]: I0129 11:27:58.015467 4766 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 29 11:27:58 crc kubenswrapper[4766]: I0129 11:27:58.042010 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 11:27:58 crc kubenswrapper[4766]: I0129 11:27:58.080256 4766 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 29 11:27:58 crc kubenswrapper[4766]: I0129 11:27:58.086351 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 29 11:27:58 crc kubenswrapper[4766]: I0129 11:27:58.124917 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 11:27:58 crc kubenswrapper[4766]: I0129 11:27:58.130797 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 11:27:58 crc kubenswrapper[4766]: I0129 11:27:58.290384 4766 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 11:27:58 crc kubenswrapper[4766]: I0129 11:27:58.314121 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 11:27:58 crc kubenswrapper[4766]: I0129 11:27:58.335986 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 11:27:58 crc kubenswrapper[4766]: I0129 11:27:58.372646 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 11:27:58 crc kubenswrapper[4766]: I0129 11:27:58.620508 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 11:27:58 crc kubenswrapper[4766]: I0129 11:27:58.628933 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 11:27:58 crc kubenswrapper[4766]: I0129 11:27:58.715250 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 29 11:27:58 crc kubenswrapper[4766]: I0129 11:27:58.827120 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 29 11:27:58 crc kubenswrapper[4766]: I0129 11:27:58.831849 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 29 11:27:58 crc kubenswrapper[4766]: I0129 11:27:58.965963 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 11:27:59 crc kubenswrapper[4766]: I0129 11:27:59.338263 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 11:27:59 crc kubenswrapper[4766]: I0129 11:27:59.365253 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 29 11:27:59 crc kubenswrapper[4766]: I0129 11:27:59.438647 4766 generic.go:334] "Generic (PLEG): container finished" podID="72cf9723-cba4-4f3b-90c4-c8b919e9b7a8" containerID="c862f98590c008134e2625f528cc31e05a05fa60a5b1d0e409b8ea4638f7a33d" exitCode=0 Jan 29 11:27:59 crc kubenswrapper[4766]: I0129 11:27:59.438775 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" event={"ID":"72cf9723-cba4-4f3b-90c4-c8b919e9b7a8","Type":"ContainerDied","Data":"c862f98590c008134e2625f528cc31e05a05fa60a5b1d0e409b8ea4638f7a33d"} Jan 29 11:27:59 crc kubenswrapper[4766]: I0129 11:27:59.440223 4766 scope.go:117] "RemoveContainer" containerID="c862f98590c008134e2625f528cc31e05a05fa60a5b1d0e409b8ea4638f7a33d" Jan 29 11:27:59 crc kubenswrapper[4766]: I0129 11:27:59.445730 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 11:27:59 crc kubenswrapper[4766]: I0129 11:27:59.510736 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 29 11:27:59 crc kubenswrapper[4766]: I0129 11:27:59.556460 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 11:27:59 crc kubenswrapper[4766]: I0129 11:27:59.615828 4766 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 11:27:59 crc kubenswrapper[4766]: I0129 11:27:59.625720 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 11:27:59 crc kubenswrapper[4766]: I0129 11:27:59.632428 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 11:27:59 crc kubenswrapper[4766]: I0129 11:27:59.872879 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 11:28:00 crc kubenswrapper[4766]: I0129 11:28:00.001177 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 29 11:28:00 crc kubenswrapper[4766]: I0129 11:28:00.173759 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 29 11:28:00 crc kubenswrapper[4766]: I0129 11:28:00.412527 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 11:28:00 crc kubenswrapper[4766]: I0129 11:28:00.433475 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 11:28:00 crc kubenswrapper[4766]: I0129 11:28:00.446216 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ztc7c_72cf9723-cba4-4f3b-90c4-c8b919e9b7a8/marketplace-operator/1.log" Jan 29 11:28:00 crc kubenswrapper[4766]: I0129 11:28:00.446842 4766 generic.go:334] "Generic (PLEG): container finished" podID="72cf9723-cba4-4f3b-90c4-c8b919e9b7a8" containerID="46f4a914955b4dfe3a60ec8a9123964661868d9be400d92a50e1ac527cf7e93c" exitCode=1 Jan 29 11:28:00 crc kubenswrapper[4766]: I0129 11:28:00.446877 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" event={"ID":"72cf9723-cba4-4f3b-90c4-c8b919e9b7a8","Type":"ContainerDied","Data":"46f4a914955b4dfe3a60ec8a9123964661868d9be400d92a50e1ac527cf7e93c"} Jan 29 11:28:00 crc kubenswrapper[4766]: I0129 11:28:00.447066 4766 scope.go:117] "RemoveContainer" containerID="c862f98590c008134e2625f528cc31e05a05fa60a5b1d0e409b8ea4638f7a33d" Jan 29 11:28:00 crc kubenswrapper[4766]: I0129 11:28:00.447878 4766 scope.go:117] "RemoveContainer" containerID="46f4a914955b4dfe3a60ec8a9123964661868d9be400d92a50e1ac527cf7e93c" Jan 29 11:28:00 crc kubenswrapper[4766]: E0129 11:28:00.448292 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-ztc7c_openshift-marketplace(72cf9723-cba4-4f3b-90c4-c8b919e9b7a8)\"" pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" podUID="72cf9723-cba4-4f3b-90c4-c8b919e9b7a8" Jan 29 11:28:00 crc kubenswrapper[4766]: I0129 11:28:00.665313 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 29 11:28:00 crc kubenswrapper[4766]: I0129 11:28:00.665490 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 11:28:00 crc kubenswrapper[4766]: I0129 
11:28:00.741477 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 29 11:28:00 crc kubenswrapper[4766]: I0129 11:28:00.745508 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 29 11:28:00 crc kubenswrapper[4766]: I0129 11:28:00.823312 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 29 11:28:00 crc kubenswrapper[4766]: I0129 11:28:00.857259 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 29 11:28:01 crc kubenswrapper[4766]: I0129 11:28:01.040098 4766 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 11:28:01 crc kubenswrapper[4766]: I0129 11:28:01.390652 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 29 11:28:01 crc kubenswrapper[4766]: I0129 11:28:01.455500 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ztc7c_72cf9723-cba4-4f3b-90c4-c8b919e9b7a8/marketplace-operator/1.log" Jan 29 11:28:01 crc kubenswrapper[4766]: I0129 11:28:01.578054 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 11:28:01 crc kubenswrapper[4766]: I0129 11:28:01.757898 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" Jan 29 11:28:01 crc kubenswrapper[4766]: I0129 11:28:01.758254 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" Jan 29 11:28:01 crc kubenswrapper[4766]: I0129 11:28:01.758880 4766 scope.go:117] "RemoveContainer" containerID="46f4a914955b4dfe3a60ec8a9123964661868d9be400d92a50e1ac527cf7e93c" Jan 29 11:28:01 crc kubenswrapper[4766]: E0129 11:28:01.759284 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-ztc7c_openshift-marketplace(72cf9723-cba4-4f3b-90c4-c8b919e9b7a8)\"" pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" podUID="72cf9723-cba4-4f3b-90c4-c8b919e9b7a8" Jan 29 11:28:01 crc kubenswrapper[4766]: I0129 11:28:01.784278 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 29 11:28:01 crc kubenswrapper[4766]: I0129 11:28:01.825030 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 11:28:01 crc kubenswrapper[4766]: I0129 11:28:01.867319 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 29 11:28:01 crc kubenswrapper[4766]: I0129 11:28:01.879306 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 11:28:01 crc kubenswrapper[4766]: I0129 11:28:01.935596 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 11:28:02 crc 
kubenswrapper[4766]: I0129 11:28:02.082253 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 11:28:02 crc kubenswrapper[4766]: I0129 11:28:02.224765 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 29 11:28:02 crc kubenswrapper[4766]: I0129 11:28:02.320060 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 29 11:28:02 crc kubenswrapper[4766]: I0129 11:28:02.345972 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 29 11:28:02 crc kubenswrapper[4766]: I0129 11:28:02.459312 4766 scope.go:117] "RemoveContainer" containerID="46f4a914955b4dfe3a60ec8a9123964661868d9be400d92a50e1ac527cf7e93c" Jan 29 11:28:02 crc kubenswrapper[4766]: E0129 11:28:02.459517 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-ztc7c_openshift-marketplace(72cf9723-cba4-4f3b-90c4-c8b919e9b7a8)\"" pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" podUID="72cf9723-cba4-4f3b-90c4-c8b919e9b7a8" Jan 29 11:28:02 crc kubenswrapper[4766]: I0129 11:28:02.589917 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 11:28:03 crc kubenswrapper[4766]: I0129 11:28:03.097238 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 11:28:03 crc kubenswrapper[4766]: I0129 11:28:03.131210 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 29 11:28:03 crc kubenswrapper[4766]: I0129 11:28:03.137241 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 29 11:28:03 crc kubenswrapper[4766]: I0129 11:28:03.407794 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 29 11:28:03 crc kubenswrapper[4766]: I0129 11:28:03.548059 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 11:28:03 crc kubenswrapper[4766]: I0129 11:28:03.555799 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 29 11:28:03 crc kubenswrapper[4766]: I0129 11:28:03.694098 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 11:28:03 crc kubenswrapper[4766]: I0129 11:28:03.731600 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 29 11:28:03 crc kubenswrapper[4766]: I0129 11:28:03.886946 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 29 11:28:03 crc kubenswrapper[4766]: I0129 11:28:03.917987 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 11:28:04 crc kubenswrapper[4766]: I0129 11:28:04.129175 4766 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 29 11:28:04 crc kubenswrapper[4766]: I0129 11:28:04.354090 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 29 11:28:04 crc kubenswrapper[4766]: I0129 11:28:04.480670 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 11:28:04 crc kubenswrapper[4766]: I0129 11:28:04.540576 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 29 11:28:04 crc kubenswrapper[4766]: I0129 11:28:04.753946 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 29 11:28:04 crc kubenswrapper[4766]: I0129 11:28:04.761640 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 11:28:04 crc kubenswrapper[4766]: I0129 11:28:04.896733 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 11:28:04 crc kubenswrapper[4766]: I0129 11:28:04.960234 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 11:28:05 crc kubenswrapper[4766]: I0129 11:28:05.211982 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 11:28:05 crc kubenswrapper[4766]: I0129 11:28:05.232665 4766 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 29 11:28:05 crc kubenswrapper[4766]: I0129 11:28:05.233539 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bd99b" podStartSLOduration=72.694844368 podStartE2EDuration="4m4.233521877s" podCreationTimestamp="2026-01-29 11:24:01 +0000 UTC" firstStartedPulling="2026-01-29 11:24:02.083931339 +0000 UTC m=+179.196324360" lastFinishedPulling="2026-01-29 11:26:53.622608858 +0000 UTC m=+350.735001869" observedRunningTime="2026-01-29 11:27:09.875216218 +0000 UTC m=+366.987609229" watchObservedRunningTime="2026-01-29 11:28:05.233521877 +0000 UTC m=+422.345914888" Jan 29 11:28:05 crc kubenswrapper[4766]: I0129 11:28:05.233727 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mpsxm" podStartSLOduration=72.816540196 podStartE2EDuration="4m1.233720853s" podCreationTimestamp="2026-01-29 11:24:04 +0000 UTC" firstStartedPulling="2026-01-29 11:24:06.317001797 +0000 UTC m=+183.429394808" lastFinishedPulling="2026-01-29 11:26:54.734182454 +0000 UTC m=+351.846575465" observedRunningTime="2026-01-29 11:27:09.830220222 +0000 UTC m=+366.942613233" watchObservedRunningTime="2026-01-29 11:28:05.233720853 +0000 UTC m=+422.346113884" Jan 29 11:28:05 crc kubenswrapper[4766]: I0129 11:28:05.233976 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9bpkx" podStartSLOduration=75.18393449 podStartE2EDuration="4m5.23397142s" podCreationTimestamp="2026-01-29 11:24:00 +0000 UTC" firstStartedPulling="2026-01-29 11:24:02.10430192 +0000 UTC m=+179.216694931" lastFinishedPulling="2026-01-29 11:26:52.15433884 +0000 UTC m=+349.266731861" observedRunningTime="2026-01-29 11:27:09.816977327 +0000 UTC 
m=+366.929370338" watchObservedRunningTime="2026-01-29 11:28:05.23397142 +0000 UTC m=+422.346364431" Jan 29 11:28:05 crc kubenswrapper[4766]: I0129 11:28:05.234102 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-plg8c" podStartSLOduration=75.019764247 podStartE2EDuration="4m3.234098354s" podCreationTimestamp="2026-01-29 11:24:02 +0000 UTC" firstStartedPulling="2026-01-29 11:24:05.203897709 +0000 UTC m=+182.316290720" lastFinishedPulling="2026-01-29 11:26:53.418231816 +0000 UTC m=+350.530624827" observedRunningTime="2026-01-29 11:27:09.778732687 +0000 UTC m=+366.891125728" watchObservedRunningTime="2026-01-29 11:28:05.234098354 +0000 UTC m=+422.346491365" Jan 29 11:28:05 crc kubenswrapper[4766]: I0129 11:28:05.236358 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6mp9b" podStartSLOduration=73.743368382 podStartE2EDuration="4m5.236349042s" podCreationTimestamp="2026-01-29 11:24:00 +0000 UTC" firstStartedPulling="2026-01-29 11:24:02.117193317 +0000 UTC m=+179.229586328" lastFinishedPulling="2026-01-29 11:26:53.610173967 +0000 UTC m=+350.722566988" observedRunningTime="2026-01-29 11:27:09.846428992 +0000 UTC m=+366.958822003" watchObservedRunningTime="2026-01-29 11:28:05.236349042 +0000 UTC m=+422.348742053" Jan 29 11:28:05 crc kubenswrapper[4766]: I0129 11:28:05.236872 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tx9nf" podStartSLOduration=73.794575611 podStartE2EDuration="4m5.236866427s" podCreationTimestamp="2026-01-29 11:24:00 +0000 UTC" firstStartedPulling="2026-01-29 11:24:02.11306813 +0000 UTC m=+179.225461141" lastFinishedPulling="2026-01-29 11:26:53.555358946 +0000 UTC m=+350.667751957" observedRunningTime="2026-01-29 11:27:09.860541052 +0000 UTC m=+366.972934063" watchObservedRunningTime="2026-01-29 11:28:05.236866427 +0000 UTC m=+422.349259448" Jan 29 11:28:05 crc kubenswrapper[4766]: I0129 11:28:05.241802 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 11:28:05 crc kubenswrapper[4766]: I0129 11:28:05.241854 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc","openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 11:28:05 crc kubenswrapper[4766]: I0129 11:28:05.245964 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:28:05 crc kubenswrapper[4766]: I0129 11:28:05.298598 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=56.298580182 podStartE2EDuration="56.298580182s" podCreationTimestamp="2026-01-29 11:27:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:28:05.286128919 +0000 UTC m=+422.398521950" watchObservedRunningTime="2026-01-29 11:28:05.298580182 +0000 UTC m=+422.410973193" Jan 29 11:28:05 crc kubenswrapper[4766]: I0129 11:28:05.299569 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=12.299564631 podStartE2EDuration="12.299564631s" podCreationTimestamp="2026-01-29 11:27:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:28:05.297622383 +0000 UTC m=+422.410015414" watchObservedRunningTime="2026-01-29 11:28:05.299564631 +0000 UTC m=+422.411957642" Jan 29 11:28:05 crc kubenswrapper[4766]: I0129 11:28:05.382124 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 29 11:28:05 crc kubenswrapper[4766]: I0129 11:28:05.388884 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 11:28:05 crc kubenswrapper[4766]: I0129 11:28:05.436943 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 11:28:05 crc kubenswrapper[4766]: I0129 11:28:05.614810 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 11:28:05 crc kubenswrapper[4766]: I0129 11:28:05.681607 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 11:28:05 crc kubenswrapper[4766]: I0129 11:28:05.918306 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 29 11:28:06 crc kubenswrapper[4766]: I0129 11:28:06.240193 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 29 11:28:06 crc kubenswrapper[4766]: I0129 11:28:06.250923 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 11:28:06 crc kubenswrapper[4766]: I0129 11:28:06.528282 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 11:28:06 crc kubenswrapper[4766]: I0129 11:28:06.597946 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 29 11:28:06 crc kubenswrapper[4766]: I0129 11:28:06.796541 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 11:28:06 crc kubenswrapper[4766]: I0129 11:28:06.876888 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 29 11:28:07 crc kubenswrapper[4766]: I0129 11:28:07.059491 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 11:28:07 crc kubenswrapper[4766]: I0129 11:28:07.179211 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 11:28:07 crc kubenswrapper[4766]: I0129 11:28:07.251432 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:28:07 crc kubenswrapper[4766]: I0129 11:28:07.251492 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:28:07 crc kubenswrapper[4766]: I0129 11:28:07.253425 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 29 11:28:07 crc kubenswrapper[4766]: I0129 11:28:07.254920 4766 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:28:07 crc kubenswrapper[4766]: I0129 11:28:07.490920 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:28:07 crc kubenswrapper[4766]: I0129 11:28:07.539154 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 11:28:07 crc kubenswrapper[4766]: I0129 11:28:07.540311 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 11:28:07 crc kubenswrapper[4766]: I0129 11:28:07.684894 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 11:28:07 crc kubenswrapper[4766]: I0129 11:28:07.803184 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 29 11:28:08 crc kubenswrapper[4766]: I0129 11:28:08.076756 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 29 11:28:08 crc kubenswrapper[4766]: I0129 11:28:08.397929 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 11:28:08 crc kubenswrapper[4766]: I0129 11:28:08.689651 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 11:28:08 crc kubenswrapper[4766]: I0129 11:28:08.888438 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 11:28:09 crc kubenswrapper[4766]: I0129 11:28:09.494267 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 29 11:28:11 crc kubenswrapper[4766]: I0129 11:28:11.508045 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 29 11:28:11 crc kubenswrapper[4766]: I0129 11:28:11.713336 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 29 11:28:11 crc kubenswrapper[4766]: I0129 11:28:11.801629 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 11:28:12 crc kubenswrapper[4766]: I0129 11:28:12.005979 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 29 11:28:12 crc kubenswrapper[4766]: I0129 11:28:12.203942 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 29 11:28:12 crc kubenswrapper[4766]: I0129 11:28:12.523207 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 11:28:13 crc kubenswrapper[4766]: I0129 11:28:13.637276 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 29 11:28:15 crc kubenswrapper[4766]: I0129 11:28:15.227320 4766 scope.go:117] "RemoveContainer" containerID="46f4a914955b4dfe3a60ec8a9123964661868d9be400d92a50e1ac527cf7e93c" Jan 29 11:28:15 crc kubenswrapper[4766]: I0129 
11:28:15.529121 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ztc7c_72cf9723-cba4-4f3b-90c4-c8b919e9b7a8/marketplace-operator/1.log" Jan 29 11:28:15 crc kubenswrapper[4766]: I0129 11:28:15.529191 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" event={"ID":"72cf9723-cba4-4f3b-90c4-c8b919e9b7a8","Type":"ContainerStarted","Data":"11b096c9f2105a2d593c3bc6034399a160aeb36772d70712f82e2a14692dc61a"} Jan 29 11:28:15 crc kubenswrapper[4766]: I0129 11:28:15.529634 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" Jan 29 11:28:15 crc kubenswrapper[4766]: I0129 11:28:15.532020 4766 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ztc7c container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 29 11:28:15 crc kubenswrapper[4766]: I0129 11:28:15.532204 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" podUID="72cf9723-cba4-4f3b-90c4-c8b919e9b7a8" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 29 11:28:16 crc kubenswrapper[4766]: I0129 11:28:16.361915 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:28:16 crc kubenswrapper[4766]: I0129 11:28:16.362268 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:28:16 crc kubenswrapper[4766]: I0129 11:28:16.486113 4766 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 11:28:16 crc kubenswrapper[4766]: I0129 11:28:16.486373 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://287cf47a22cd5eef2c815eb5b70b79d441f052346440ed5726f5df4276a789c1" gracePeriod=5 Jan 29 11:28:16 crc kubenswrapper[4766]: I0129 11:28:16.536766 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" Jan 29 11:28:19 crc kubenswrapper[4766]: I0129 11:28:19.621746 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d558b78b6-6psxn"] Jan 29 11:28:19 crc kubenswrapper[4766]: I0129 11:28:19.621979 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d558b78b6-6psxn" podUID="688635bd-e6d5-43bc-a5b8-21f485a3621b" containerName="controller-manager" 
containerID="cri-o://a04645725781d379170cf66e541cb2f1ed3ff03900fbee06a34a72ccc6c9a575" gracePeriod=30 Jan 29 11:28:19 crc kubenswrapper[4766]: I0129 11:28:19.716843 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k"] Jan 29 11:28:19 crc kubenswrapper[4766]: I0129 11:28:19.717093 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k" podUID="96faba9a-6377-4d35-8809-ec064f590a37" containerName="route-controller-manager" containerID="cri-o://a8202c636a4ca87ac0b15fb87f19db9c51edea56dbedad933ac28c96d05a23c7" gracePeriod=30 Jan 29 11:28:19 crc kubenswrapper[4766]: I0129 11:28:19.944649 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d558b78b6-6psxn" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.058541 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/688635bd-e6d5-43bc-a5b8-21f485a3621b-proxy-ca-bundles\") pod \"688635bd-e6d5-43bc-a5b8-21f485a3621b\" (UID: \"688635bd-e6d5-43bc-a5b8-21f485a3621b\") " Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.058633 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/688635bd-e6d5-43bc-a5b8-21f485a3621b-serving-cert\") pod \"688635bd-e6d5-43bc-a5b8-21f485a3621b\" (UID: \"688635bd-e6d5-43bc-a5b8-21f485a3621b\") " Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.058660 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/688635bd-e6d5-43bc-a5b8-21f485a3621b-client-ca\") pod \"688635bd-e6d5-43bc-a5b8-21f485a3621b\" (UID: \"688635bd-e6d5-43bc-a5b8-21f485a3621b\") " Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.058693 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/688635bd-e6d5-43bc-a5b8-21f485a3621b-config\") pod \"688635bd-e6d5-43bc-a5b8-21f485a3621b\" (UID: \"688635bd-e6d5-43bc-a5b8-21f485a3621b\") " Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.058762 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skbj5\" (UniqueName: \"kubernetes.io/projected/688635bd-e6d5-43bc-a5b8-21f485a3621b-kube-api-access-skbj5\") pod \"688635bd-e6d5-43bc-a5b8-21f485a3621b\" (UID: \"688635bd-e6d5-43bc-a5b8-21f485a3621b\") " Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.059489 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/688635bd-e6d5-43bc-a5b8-21f485a3621b-client-ca" (OuterVolumeSpecName: "client-ca") pod "688635bd-e6d5-43bc-a5b8-21f485a3621b" (UID: "688635bd-e6d5-43bc-a5b8-21f485a3621b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.059571 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/688635bd-e6d5-43bc-a5b8-21f485a3621b-config" (OuterVolumeSpecName: "config") pod "688635bd-e6d5-43bc-a5b8-21f485a3621b" (UID: "688635bd-e6d5-43bc-a5b8-21f485a3621b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.060013 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/688635bd-e6d5-43bc-a5b8-21f485a3621b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "688635bd-e6d5-43bc-a5b8-21f485a3621b" (UID: "688635bd-e6d5-43bc-a5b8-21f485a3621b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.061945 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.064212 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/688635bd-e6d5-43bc-a5b8-21f485a3621b-kube-api-access-skbj5" (OuterVolumeSpecName: "kube-api-access-skbj5") pod "688635bd-e6d5-43bc-a5b8-21f485a3621b" (UID: "688635bd-e6d5-43bc-a5b8-21f485a3621b"). InnerVolumeSpecName "kube-api-access-skbj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.064589 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688635bd-e6d5-43bc-a5b8-21f485a3621b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "688635bd-e6d5-43bc-a5b8-21f485a3621b" (UID: "688635bd-e6d5-43bc-a5b8-21f485a3621b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.159750 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96faba9a-6377-4d35-8809-ec064f590a37-client-ca\") pod \"96faba9a-6377-4d35-8809-ec064f590a37\" (UID: \"96faba9a-6377-4d35-8809-ec064f590a37\") " Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.159920 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96faba9a-6377-4d35-8809-ec064f590a37-serving-cert\") pod \"96faba9a-6377-4d35-8809-ec064f590a37\" (UID: \"96faba9a-6377-4d35-8809-ec064f590a37\") " Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.159949 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqrwk\" (UniqueName: \"kubernetes.io/projected/96faba9a-6377-4d35-8809-ec064f590a37-kube-api-access-vqrwk\") pod \"96faba9a-6377-4d35-8809-ec064f590a37\" (UID: \"96faba9a-6377-4d35-8809-ec064f590a37\") " Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.159986 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96faba9a-6377-4d35-8809-ec064f590a37-config\") pod \"96faba9a-6377-4d35-8809-ec064f590a37\" (UID: \"96faba9a-6377-4d35-8809-ec064f590a37\") " Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.160255 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/688635bd-e6d5-43bc-a5b8-21f485a3621b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.160278 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/688635bd-e6d5-43bc-a5b8-21f485a3621b-serving-cert\") on node \"crc\" DevicePath \"\"" 
Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.160290 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/688635bd-e6d5-43bc-a5b8-21f485a3621b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.160301 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/688635bd-e6d5-43bc-a5b8-21f485a3621b-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.160312 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skbj5\" (UniqueName: \"kubernetes.io/projected/688635bd-e6d5-43bc-a5b8-21f485a3621b-kube-api-access-skbj5\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.160689 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96faba9a-6377-4d35-8809-ec064f590a37-client-ca" (OuterVolumeSpecName: "client-ca") pod "96faba9a-6377-4d35-8809-ec064f590a37" (UID: "96faba9a-6377-4d35-8809-ec064f590a37"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.160762 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96faba9a-6377-4d35-8809-ec064f590a37-config" (OuterVolumeSpecName: "config") pod "96faba9a-6377-4d35-8809-ec064f590a37" (UID: "96faba9a-6377-4d35-8809-ec064f590a37"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.163590 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96faba9a-6377-4d35-8809-ec064f590a37-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "96faba9a-6377-4d35-8809-ec064f590a37" (UID: "96faba9a-6377-4d35-8809-ec064f590a37"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.163930 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96faba9a-6377-4d35-8809-ec064f590a37-kube-api-access-vqrwk" (OuterVolumeSpecName: "kube-api-access-vqrwk") pod "96faba9a-6377-4d35-8809-ec064f590a37" (UID: "96faba9a-6377-4d35-8809-ec064f590a37"). InnerVolumeSpecName "kube-api-access-vqrwk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.261460 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96faba9a-6377-4d35-8809-ec064f590a37-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.261795 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqrwk\" (UniqueName: \"kubernetes.io/projected/96faba9a-6377-4d35-8809-ec064f590a37-kube-api-access-vqrwk\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.261814 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96faba9a-6377-4d35-8809-ec064f590a37-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.261826 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96faba9a-6377-4d35-8809-ec064f590a37-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.556099 4766 generic.go:334] "Generic (PLEG): container finished" podID="688635bd-e6d5-43bc-a5b8-21f485a3621b" containerID="a04645725781d379170cf66e541cb2f1ed3ff03900fbee06a34a72ccc6c9a575" exitCode=0 Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.556175 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d558b78b6-6psxn" event={"ID":"688635bd-e6d5-43bc-a5b8-21f485a3621b","Type":"ContainerDied","Data":"a04645725781d379170cf66e541cb2f1ed3ff03900fbee06a34a72ccc6c9a575"} Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.556211 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d558b78b6-6psxn" event={"ID":"688635bd-e6d5-43bc-a5b8-21f485a3621b","Type":"ContainerDied","Data":"9ad29b0bcf980cb4f06e39a023eef478a1687b07dcf3cf5cab340009af0a2257"} Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.556230 4766 scope.go:117] "RemoveContainer" containerID="a04645725781d379170cf66e541cb2f1ed3ff03900fbee06a34a72ccc6c9a575" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.556393 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d558b78b6-6psxn" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.564093 4766 generic.go:334] "Generic (PLEG): container finished" podID="96faba9a-6377-4d35-8809-ec064f590a37" containerID="a8202c636a4ca87ac0b15fb87f19db9c51edea56dbedad933ac28c96d05a23c7" exitCode=0 Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.564142 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k" event={"ID":"96faba9a-6377-4d35-8809-ec064f590a37","Type":"ContainerDied","Data":"a8202c636a4ca87ac0b15fb87f19db9c51edea56dbedad933ac28c96d05a23c7"} Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.564149 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.564170 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k" event={"ID":"96faba9a-6377-4d35-8809-ec064f590a37","Type":"ContainerDied","Data":"cec6fdd5d9650ef5362d79977023b4b3e64fc2232afa22911e89b2cc33bc7a51"} Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.576165 4766 scope.go:117] "RemoveContainer" containerID="a04645725781d379170cf66e541cb2f1ed3ff03900fbee06a34a72ccc6c9a575" Jan 29 11:28:20 crc kubenswrapper[4766]: E0129 11:28:20.576719 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a04645725781d379170cf66e541cb2f1ed3ff03900fbee06a34a72ccc6c9a575\": container with ID starting with a04645725781d379170cf66e541cb2f1ed3ff03900fbee06a34a72ccc6c9a575 not found: ID does not exist" containerID="a04645725781d379170cf66e541cb2f1ed3ff03900fbee06a34a72ccc6c9a575" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.576758 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a04645725781d379170cf66e541cb2f1ed3ff03900fbee06a34a72ccc6c9a575"} err="failed to get container status \"a04645725781d379170cf66e541cb2f1ed3ff03900fbee06a34a72ccc6c9a575\": rpc error: code = NotFound desc = could not find container \"a04645725781d379170cf66e541cb2f1ed3ff03900fbee06a34a72ccc6c9a575\": container with ID starting with a04645725781d379170cf66e541cb2f1ed3ff03900fbee06a34a72ccc6c9a575 not found: ID does not exist" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.576790 4766 scope.go:117] "RemoveContainer" containerID="a8202c636a4ca87ac0b15fb87f19db9c51edea56dbedad933ac28c96d05a23c7" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.595101 4766 scope.go:117] "RemoveContainer" containerID="a8202c636a4ca87ac0b15fb87f19db9c51edea56dbedad933ac28c96d05a23c7" Jan 29 11:28:20 crc kubenswrapper[4766]: E0129 11:28:20.595807 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8202c636a4ca87ac0b15fb87f19db9c51edea56dbedad933ac28c96d05a23c7\": container with ID starting with a8202c636a4ca87ac0b15fb87f19db9c51edea56dbedad933ac28c96d05a23c7 not found: ID does not exist" containerID="a8202c636a4ca87ac0b15fb87f19db9c51edea56dbedad933ac28c96d05a23c7" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.595873 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8202c636a4ca87ac0b15fb87f19db9c51edea56dbedad933ac28c96d05a23c7"} err="failed to get container status \"a8202c636a4ca87ac0b15fb87f19db9c51edea56dbedad933ac28c96d05a23c7\": rpc error: code = NotFound desc = could not find container \"a8202c636a4ca87ac0b15fb87f19db9c51edea56dbedad933ac28c96d05a23c7\": container with ID starting with a8202c636a4ca87ac0b15fb87f19db9c51edea56dbedad933ac28c96d05a23c7 not found: ID does not exist" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.609527 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d558b78b6-6psxn"] Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.613633 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d558b78b6-6psxn"] Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.623769 
4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k"] Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.627910 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d67997fd-npc4k"] Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.752276 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j"] Jan 29 11:28:20 crc kubenswrapper[4766]: E0129 11:28:20.752521 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96faba9a-6377-4d35-8809-ec064f590a37" containerName="route-controller-manager" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.752538 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="96faba9a-6377-4d35-8809-ec064f590a37" containerName="route-controller-manager" Jan 29 11:28:20 crc kubenswrapper[4766]: E0129 11:28:20.752549 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.752560 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 11:28:20 crc kubenswrapper[4766]: E0129 11:28:20.752577 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="688635bd-e6d5-43bc-a5b8-21f485a3621b" containerName="controller-manager" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.752587 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="688635bd-e6d5-43bc-a5b8-21f485a3621b" containerName="controller-manager" Jan 29 11:28:20 crc kubenswrapper[4766]: E0129 11:28:20.752597 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" containerName="installer" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.752602 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" containerName="installer" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.752773 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="688635bd-e6d5-43bc-a5b8-21f485a3621b" containerName="controller-manager" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.752789 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="96faba9a-6377-4d35-8809-ec064f590a37" containerName="route-controller-manager" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.752802 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a84c5fe-7616-4823-9559-2f1a6dc0237e" containerName="installer" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.752813 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.753237 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.759366 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.759399 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.759646 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.760282 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.760297 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.764888 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.774813 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.779643 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j"] Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.869448 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e174bfb2-e9f9-4add-ae12-f4a024e975f4-config\") pod \"controller-manager-7f4c6d6c58-8bf7j\" (UID: \"e174bfb2-e9f9-4add-ae12-f4a024e975f4\") " pod="openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.869552 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e174bfb2-e9f9-4add-ae12-f4a024e975f4-client-ca\") pod \"controller-manager-7f4c6d6c58-8bf7j\" (UID: \"e174bfb2-e9f9-4add-ae12-f4a024e975f4\") " pod="openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.869577 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e174bfb2-e9f9-4add-ae12-f4a024e975f4-proxy-ca-bundles\") pod \"controller-manager-7f4c6d6c58-8bf7j\" (UID: \"e174bfb2-e9f9-4add-ae12-f4a024e975f4\") " pod="openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.869601 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2cgl\" (UniqueName: \"kubernetes.io/projected/e174bfb2-e9f9-4add-ae12-f4a024e975f4-kube-api-access-w2cgl\") pod \"controller-manager-7f4c6d6c58-8bf7j\" (UID: \"e174bfb2-e9f9-4add-ae12-f4a024e975f4\") " pod="openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.869621 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e174bfb2-e9f9-4add-ae12-f4a024e975f4-serving-cert\") pod \"controller-manager-7f4c6d6c58-8bf7j\" (UID: \"e174bfb2-e9f9-4add-ae12-f4a024e975f4\") " pod="openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.971112 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e174bfb2-e9f9-4add-ae12-f4a024e975f4-client-ca\") pod \"controller-manager-7f4c6d6c58-8bf7j\" (UID: \"e174bfb2-e9f9-4add-ae12-f4a024e975f4\") " pod="openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.971193 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e174bfb2-e9f9-4add-ae12-f4a024e975f4-proxy-ca-bundles\") pod \"controller-manager-7f4c6d6c58-8bf7j\" (UID: \"e174bfb2-e9f9-4add-ae12-f4a024e975f4\") " pod="openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.971231 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2cgl\" (UniqueName: \"kubernetes.io/projected/e174bfb2-e9f9-4add-ae12-f4a024e975f4-kube-api-access-w2cgl\") pod \"controller-manager-7f4c6d6c58-8bf7j\" (UID: \"e174bfb2-e9f9-4add-ae12-f4a024e975f4\") " pod="openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.971256 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e174bfb2-e9f9-4add-ae12-f4a024e975f4-serving-cert\") pod \"controller-manager-7f4c6d6c58-8bf7j\" (UID: \"e174bfb2-e9f9-4add-ae12-f4a024e975f4\") " pod="openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.971281 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e174bfb2-e9f9-4add-ae12-f4a024e975f4-config\") pod \"controller-manager-7f4c6d6c58-8bf7j\" (UID: \"e174bfb2-e9f9-4add-ae12-f4a024e975f4\") " pod="openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.972954 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e174bfb2-e9f9-4add-ae12-f4a024e975f4-config\") pod \"controller-manager-7f4c6d6c58-8bf7j\" (UID: \"e174bfb2-e9f9-4add-ae12-f4a024e975f4\") " pod="openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.973643 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e174bfb2-e9f9-4add-ae12-f4a024e975f4-client-ca\") pod \"controller-manager-7f4c6d6c58-8bf7j\" (UID: \"e174bfb2-e9f9-4add-ae12-f4a024e975f4\") " pod="openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.974978 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e174bfb2-e9f9-4add-ae12-f4a024e975f4-proxy-ca-bundles\") pod \"controller-manager-7f4c6d6c58-8bf7j\" (UID: \"e174bfb2-e9f9-4add-ae12-f4a024e975f4\") " 
pod="openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.980821 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e174bfb2-e9f9-4add-ae12-f4a024e975f4-serving-cert\") pod \"controller-manager-7f4c6d6c58-8bf7j\" (UID: \"e174bfb2-e9f9-4add-ae12-f4a024e975f4\") " pod="openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j" Jan 29 11:28:20 crc kubenswrapper[4766]: I0129 11:28:20.992715 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2cgl\" (UniqueName: \"kubernetes.io/projected/e174bfb2-e9f9-4add-ae12-f4a024e975f4-kube-api-access-w2cgl\") pod \"controller-manager-7f4c6d6c58-8bf7j\" (UID: \"e174bfb2-e9f9-4add-ae12-f4a024e975f4\") " pod="openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j" Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.098328 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j" Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.238334 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="688635bd-e6d5-43bc-a5b8-21f485a3621b" path="/var/lib/kubelet/pods/688635bd-e6d5-43bc-a5b8-21f485a3621b/volumes" Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.239984 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96faba9a-6377-4d35-8809-ec064f590a37" path="/var/lib/kubelet/pods/96faba9a-6377-4d35-8809-ec064f590a37/volumes" Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.286180 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j"] Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.572285 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j" event={"ID":"e174bfb2-e9f9-4add-ae12-f4a024e975f4","Type":"ContainerStarted","Data":"0aef894419050a1fb466ed0ff796ae2074611bb8955b7da81de5a29fcfad3d58"} Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.572332 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j" event={"ID":"e174bfb2-e9f9-4add-ae12-f4a024e975f4","Type":"ContainerStarted","Data":"8864e70689ab1bdd686110f22aaf1b581402c6185eee8e857d3f7bf9ea16a745"} Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.573621 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j" Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.576740 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.576863 4766 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="287cf47a22cd5eef2c815eb5b70b79d441f052346440ed5726f5df4276a789c1" exitCode=137 Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.579683 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j" Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.592795 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j" podStartSLOduration=2.592775668 podStartE2EDuration="2.592775668s" podCreationTimestamp="2026-01-29 11:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:28:21.592648664 +0000 UTC m=+438.705041675" watchObservedRunningTime="2026-01-29 11:28:21.592775668 +0000 UTC m=+438.705168689" Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.754360 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx"] Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.755197 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx" Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.757397 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.757675 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.757890 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.757998 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.758263 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.758396 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.768263 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx"] Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.804891 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99sn9\" (UniqueName: \"kubernetes.io/projected/b592b00c-fc1b-40b5-bca9-dcb11ae8a698-kube-api-access-99sn9\") pod \"route-controller-manager-5486d5646-nkkmx\" (UID: \"b592b00c-fc1b-40b5-bca9-dcb11ae8a698\") " pod="openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx" Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.804954 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b592b00c-fc1b-40b5-bca9-dcb11ae8a698-config\") pod \"route-controller-manager-5486d5646-nkkmx\" (UID: \"b592b00c-fc1b-40b5-bca9-dcb11ae8a698\") " pod="openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx" Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.804996 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b592b00c-fc1b-40b5-bca9-dcb11ae8a698-client-ca\") pod \"route-controller-manager-5486d5646-nkkmx\" (UID: \"b592b00c-fc1b-40b5-bca9-dcb11ae8a698\") " 
pod="openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx" Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.805019 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b592b00c-fc1b-40b5-bca9-dcb11ae8a698-serving-cert\") pod \"route-controller-manager-5486d5646-nkkmx\" (UID: \"b592b00c-fc1b-40b5-bca9-dcb11ae8a698\") " pod="openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx" Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.906530 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b592b00c-fc1b-40b5-bca9-dcb11ae8a698-config\") pod \"route-controller-manager-5486d5646-nkkmx\" (UID: \"b592b00c-fc1b-40b5-bca9-dcb11ae8a698\") " pod="openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx" Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.906587 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b592b00c-fc1b-40b5-bca9-dcb11ae8a698-client-ca\") pod \"route-controller-manager-5486d5646-nkkmx\" (UID: \"b592b00c-fc1b-40b5-bca9-dcb11ae8a698\") " pod="openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx" Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.906605 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b592b00c-fc1b-40b5-bca9-dcb11ae8a698-serving-cert\") pod \"route-controller-manager-5486d5646-nkkmx\" (UID: \"b592b00c-fc1b-40b5-bca9-dcb11ae8a698\") " pod="openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx" Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.906669 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99sn9\" (UniqueName: \"kubernetes.io/projected/b592b00c-fc1b-40b5-bca9-dcb11ae8a698-kube-api-access-99sn9\") pod \"route-controller-manager-5486d5646-nkkmx\" (UID: \"b592b00c-fc1b-40b5-bca9-dcb11ae8a698\") " pod="openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx" Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.908035 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b592b00c-fc1b-40b5-bca9-dcb11ae8a698-client-ca\") pod \"route-controller-manager-5486d5646-nkkmx\" (UID: \"b592b00c-fc1b-40b5-bca9-dcb11ae8a698\") " pod="openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx" Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.908041 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b592b00c-fc1b-40b5-bca9-dcb11ae8a698-config\") pod \"route-controller-manager-5486d5646-nkkmx\" (UID: \"b592b00c-fc1b-40b5-bca9-dcb11ae8a698\") " pod="openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx" Jan 29 11:28:21 crc kubenswrapper[4766]: I0129 11:28:21.913664 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b592b00c-fc1b-40b5-bca9-dcb11ae8a698-serving-cert\") pod \"route-controller-manager-5486d5646-nkkmx\" (UID: \"b592b00c-fc1b-40b5-bca9-dcb11ae8a698\") " pod="openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx" Jan 29 11:28:21 crc 
kubenswrapper[4766]: I0129 11:28:21.921225 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99sn9\" (UniqueName: \"kubernetes.io/projected/b592b00c-fc1b-40b5-bca9-dcb11ae8a698-kube-api-access-99sn9\") pod \"route-controller-manager-5486d5646-nkkmx\" (UID: \"b592b00c-fc1b-40b5-bca9-dcb11ae8a698\") " pod="openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx" Jan 29 11:28:22 crc kubenswrapper[4766]: I0129 11:28:22.054026 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 11:28:22 crc kubenswrapper[4766]: I0129 11:28:22.054115 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:28:22 crc kubenswrapper[4766]: I0129 11:28:22.068506 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx" Jan 29 11:28:22 crc kubenswrapper[4766]: I0129 11:28:22.109165 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 11:28:22 crc kubenswrapper[4766]: I0129 11:28:22.109335 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 11:28:22 crc kubenswrapper[4766]: I0129 11:28:22.109364 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 11:28:22 crc kubenswrapper[4766]: I0129 11:28:22.109430 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:28:22 crc kubenswrapper[4766]: I0129 11:28:22.109458 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 11:28:22 crc kubenswrapper[4766]: I0129 11:28:22.109526 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:28:22 crc kubenswrapper[4766]: I0129 11:28:22.109559 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 11:28:22 crc kubenswrapper[4766]: I0129 11:28:22.109542 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:28:22 crc kubenswrapper[4766]: I0129 11:28:22.109750 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:28:22 crc kubenswrapper[4766]: I0129 11:28:22.110325 4766 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:22 crc kubenswrapper[4766]: I0129 11:28:22.110350 4766 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:22 crc kubenswrapper[4766]: I0129 11:28:22.110402 4766 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:22 crc kubenswrapper[4766]: I0129 11:28:22.110447 4766 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:22 crc kubenswrapper[4766]: I0129 11:28:22.115211 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:28:22 crc kubenswrapper[4766]: I0129 11:28:22.211848 4766 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:22 crc kubenswrapper[4766]: I0129 11:28:22.273131 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx"] Jan 29 11:28:22 crc kubenswrapper[4766]: I0129 11:28:22.584277 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 11:28:22 crc kubenswrapper[4766]: I0129 11:28:22.584377 4766 scope.go:117] "RemoveContainer" containerID="287cf47a22cd5eef2c815eb5b70b79d441f052346440ed5726f5df4276a789c1" Jan 29 11:28:22 crc kubenswrapper[4766]: I0129 11:28:22.584610 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:28:22 crc kubenswrapper[4766]: I0129 11:28:22.586997 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx" event={"ID":"b592b00c-fc1b-40b5-bca9-dcb11ae8a698","Type":"ContainerStarted","Data":"2dd3a7e503d50ef0d9d583aad09298a3a2442d19242aa54f2c55f66fc27281bc"} Jan 29 11:28:23 crc kubenswrapper[4766]: I0129 11:28:23.232714 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 29 11:28:23 crc kubenswrapper[4766]: I0129 11:28:23.233727 4766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 29 11:28:23 crc kubenswrapper[4766]: I0129 11:28:23.244723 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 11:28:23 crc kubenswrapper[4766]: I0129 11:28:23.244770 4766 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="3a7674e4-158b-4b77-abd2-19578bb51803" Jan 29 11:28:23 crc kubenswrapper[4766]: I0129 11:28:23.248200 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 11:28:23 crc kubenswrapper[4766]: I0129 11:28:23.248245 4766 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="3a7674e4-158b-4b77-abd2-19578bb51803" Jan 29 11:28:23 crc kubenswrapper[4766]: I0129 11:28:23.598754 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx" event={"ID":"b592b00c-fc1b-40b5-bca9-dcb11ae8a698","Type":"ContainerStarted","Data":"5cd4d334ee4dea726355d3ce96886bc17603ee91a31fc98e581fc2e86ed32f57"} Jan 29 11:28:23 crc kubenswrapper[4766]: I0129 11:28:23.621487 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx" podStartSLOduration=4.621450984 podStartE2EDuration="4.621450984s" podCreationTimestamp="2026-01-29 11:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:28:23.619027031 +0000 UTC m=+440.731420042" watchObservedRunningTime="2026-01-29 11:28:23.621450984 +0000 UTC m=+440.733843995" Jan 29 11:28:24 crc kubenswrapper[4766]: I0129 11:28:24.605106 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx" Jan 29 11:28:24 crc kubenswrapper[4766]: I0129 11:28:24.611362 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx" Jan 29 11:28:42 crc kubenswrapper[4766]: I0129 11:28:42.090156 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j"] Jan 29 11:28:42 crc kubenswrapper[4766]: I0129 11:28:42.090905 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j" podUID="e174bfb2-e9f9-4add-ae12-f4a024e975f4" containerName="controller-manager" containerID="cri-o://0aef894419050a1fb466ed0ff796ae2074611bb8955b7da81de5a29fcfad3d58" gracePeriod=30 Jan 29 11:28:42 crc kubenswrapper[4766]: I0129 11:28:42.109593 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx"] Jan 29 11:28:42 crc kubenswrapper[4766]: I0129 11:28:42.109852 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx" podUID="b592b00c-fc1b-40b5-bca9-dcb11ae8a698" containerName="route-controller-manager" containerID="cri-o://5cd4d334ee4dea726355d3ce96886bc17603ee91a31fc98e581fc2e86ed32f57" gracePeriod=30 Jan 29 11:28:42 crc kubenswrapper[4766]: I0129 11:28:42.706846 4766 generic.go:334] "Generic (PLEG): container finished" podID="e174bfb2-e9f9-4add-ae12-f4a024e975f4" containerID="0aef894419050a1fb466ed0ff796ae2074611bb8955b7da81de5a29fcfad3d58" exitCode=0 Jan 29 11:28:42 crc kubenswrapper[4766]: I0129 11:28:42.707001 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j" event={"ID":"e174bfb2-e9f9-4add-ae12-f4a024e975f4","Type":"ContainerDied","Data":"0aef894419050a1fb466ed0ff796ae2074611bb8955b7da81de5a29fcfad3d58"} Jan 29 11:28:42 crc kubenswrapper[4766]: I0129 11:28:42.709659 4766 generic.go:334] "Generic (PLEG): container finished" podID="b592b00c-fc1b-40b5-bca9-dcb11ae8a698" containerID="5cd4d334ee4dea726355d3ce96886bc17603ee91a31fc98e581fc2e86ed32f57" exitCode=0 Jan 29 11:28:42 crc kubenswrapper[4766]: I0129 11:28:42.709711 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx" event={"ID":"b592b00c-fc1b-40b5-bca9-dcb11ae8a698","Type":"ContainerDied","Data":"5cd4d334ee4dea726355d3ce96886bc17603ee91a31fc98e581fc2e86ed32f57"} Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.113698 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.191990 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.229893 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b592b00c-fc1b-40b5-bca9-dcb11ae8a698-config\") pod \"b592b00c-fc1b-40b5-bca9-dcb11ae8a698\" (UID: \"b592b00c-fc1b-40b5-bca9-dcb11ae8a698\") " Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.229976 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b592b00c-fc1b-40b5-bca9-dcb11ae8a698-serving-cert\") pod \"b592b00c-fc1b-40b5-bca9-dcb11ae8a698\" (UID: \"b592b00c-fc1b-40b5-bca9-dcb11ae8a698\") " Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.230063 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99sn9\" (UniqueName: \"kubernetes.io/projected/b592b00c-fc1b-40b5-bca9-dcb11ae8a698-kube-api-access-99sn9\") pod \"b592b00c-fc1b-40b5-bca9-dcb11ae8a698\" (UID: \"b592b00c-fc1b-40b5-bca9-dcb11ae8a698\") " Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.230090 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b592b00c-fc1b-40b5-bca9-dcb11ae8a698-client-ca\") pod \"b592b00c-fc1b-40b5-bca9-dcb11ae8a698\" (UID: \"b592b00c-fc1b-40b5-bca9-dcb11ae8a698\") " Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.231249 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b592b00c-fc1b-40b5-bca9-dcb11ae8a698-client-ca" (OuterVolumeSpecName: "client-ca") pod "b592b00c-fc1b-40b5-bca9-dcb11ae8a698" (UID: "b592b00c-fc1b-40b5-bca9-dcb11ae8a698"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.231548 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b592b00c-fc1b-40b5-bca9-dcb11ae8a698-config" (OuterVolumeSpecName: "config") pod "b592b00c-fc1b-40b5-bca9-dcb11ae8a698" (UID: "b592b00c-fc1b-40b5-bca9-dcb11ae8a698"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.237108 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b592b00c-fc1b-40b5-bca9-dcb11ae8a698-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b592b00c-fc1b-40b5-bca9-dcb11ae8a698" (UID: "b592b00c-fc1b-40b5-bca9-dcb11ae8a698"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.237288 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b592b00c-fc1b-40b5-bca9-dcb11ae8a698-kube-api-access-99sn9" (OuterVolumeSpecName: "kube-api-access-99sn9") pod "b592b00c-fc1b-40b5-bca9-dcb11ae8a698" (UID: "b592b00c-fc1b-40b5-bca9-dcb11ae8a698"). InnerVolumeSpecName "kube-api-access-99sn9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.331514 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e174bfb2-e9f9-4add-ae12-f4a024e975f4-config\") pod \"e174bfb2-e9f9-4add-ae12-f4a024e975f4\" (UID: \"e174bfb2-e9f9-4add-ae12-f4a024e975f4\") " Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.332068 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e174bfb2-e9f9-4add-ae12-f4a024e975f4-client-ca\") pod \"e174bfb2-e9f9-4add-ae12-f4a024e975f4\" (UID: \"e174bfb2-e9f9-4add-ae12-f4a024e975f4\") " Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.332268 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2cgl\" (UniqueName: \"kubernetes.io/projected/e174bfb2-e9f9-4add-ae12-f4a024e975f4-kube-api-access-w2cgl\") pod \"e174bfb2-e9f9-4add-ae12-f4a024e975f4\" (UID: \"e174bfb2-e9f9-4add-ae12-f4a024e975f4\") " Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.332382 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e174bfb2-e9f9-4add-ae12-f4a024e975f4-proxy-ca-bundles\") pod \"e174bfb2-e9f9-4add-ae12-f4a024e975f4\" (UID: \"e174bfb2-e9f9-4add-ae12-f4a024e975f4\") " Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.332535 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e174bfb2-e9f9-4add-ae12-f4a024e975f4-config" (OuterVolumeSpecName: "config") pod "e174bfb2-e9f9-4add-ae12-f4a024e975f4" (UID: "e174bfb2-e9f9-4add-ae12-f4a024e975f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.332568 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e174bfb2-e9f9-4add-ae12-f4a024e975f4-serving-cert\") pod \"e174bfb2-e9f9-4add-ae12-f4a024e975f4\" (UID: \"e174bfb2-e9f9-4add-ae12-f4a024e975f4\") " Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.333197 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e174bfb2-e9f9-4add-ae12-f4a024e975f4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e174bfb2-e9f9-4add-ae12-f4a024e975f4" (UID: "e174bfb2-e9f9-4add-ae12-f4a024e975f4"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.333490 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e174bfb2-e9f9-4add-ae12-f4a024e975f4-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.333510 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b592b00c-fc1b-40b5-bca9-dcb11ae8a698-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.333521 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b592b00c-fc1b-40b5-bca9-dcb11ae8a698-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.333532 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e174bfb2-e9f9-4add-ae12-f4a024e975f4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.333545 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99sn9\" (UniqueName: \"kubernetes.io/projected/b592b00c-fc1b-40b5-bca9-dcb11ae8a698-kube-api-access-99sn9\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.333553 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b592b00c-fc1b-40b5-bca9-dcb11ae8a698-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.333583 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e174bfb2-e9f9-4add-ae12-f4a024e975f4-client-ca" (OuterVolumeSpecName: "client-ca") pod "e174bfb2-e9f9-4add-ae12-f4a024e975f4" (UID: "e174bfb2-e9f9-4add-ae12-f4a024e975f4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.337060 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e174bfb2-e9f9-4add-ae12-f4a024e975f4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e174bfb2-e9f9-4add-ae12-f4a024e975f4" (UID: "e174bfb2-e9f9-4add-ae12-f4a024e975f4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.337076 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e174bfb2-e9f9-4add-ae12-f4a024e975f4-kube-api-access-w2cgl" (OuterVolumeSpecName: "kube-api-access-w2cgl") pod "e174bfb2-e9f9-4add-ae12-f4a024e975f4" (UID: "e174bfb2-e9f9-4add-ae12-f4a024e975f4"). InnerVolumeSpecName "kube-api-access-w2cgl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.434908 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e174bfb2-e9f9-4add-ae12-f4a024e975f4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.434952 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2cgl\" (UniqueName: \"kubernetes.io/projected/e174bfb2-e9f9-4add-ae12-f4a024e975f4-kube-api-access-w2cgl\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.434963 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e174bfb2-e9f9-4add-ae12-f4a024e975f4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.718068 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j" event={"ID":"e174bfb2-e9f9-4add-ae12-f4a024e975f4","Type":"ContainerDied","Data":"8864e70689ab1bdd686110f22aaf1b581402c6185eee8e857d3f7bf9ea16a745"} Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.718138 4766 scope.go:117] "RemoveContainer" containerID="0aef894419050a1fb466ed0ff796ae2074611bb8955b7da81de5a29fcfad3d58" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.718099 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.721209 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx" event={"ID":"b592b00c-fc1b-40b5-bca9-dcb11ae8a698","Type":"ContainerDied","Data":"2dd3a7e503d50ef0d9d583aad09298a3a2442d19242aa54f2c55f66fc27281bc"} Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.721430 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.737356 4766 scope.go:117] "RemoveContainer" containerID="5cd4d334ee4dea726355d3ce96886bc17603ee91a31fc98e581fc2e86ed32f57" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.755388 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx"] Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.768967 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5486d5646-nkkmx"] Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.775253 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j"] Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.781227 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7f4c6d6c58-8bf7j"] Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.785241 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn"] Jan 29 11:28:43 crc kubenswrapper[4766]: E0129 11:28:43.785645 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e174bfb2-e9f9-4add-ae12-f4a024e975f4" containerName="controller-manager" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.785729 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e174bfb2-e9f9-4add-ae12-f4a024e975f4" containerName="controller-manager" Jan 29 11:28:43 crc kubenswrapper[4766]: E0129 11:28:43.785804 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b592b00c-fc1b-40b5-bca9-dcb11ae8a698" containerName="route-controller-manager" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.785856 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b592b00c-fc1b-40b5-bca9-dcb11ae8a698" containerName="route-controller-manager" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.786016 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e174bfb2-e9f9-4add-ae12-f4a024e975f4" containerName="controller-manager" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.786139 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b592b00c-fc1b-40b5-bca9-dcb11ae8a698" containerName="route-controller-manager" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.787704 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.791304 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn"] Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.795657 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.796670 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.796889 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.797016 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.797108 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.797583 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.944900 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t59k\" (UniqueName: \"kubernetes.io/projected/a83e7e13-cb6b-4360-a57e-5fcf24f14286-kube-api-access-8t59k\") pod \"route-controller-manager-58599c79bf-pkqzn\" (UID: \"a83e7e13-cb6b-4360-a57e-5fcf24f14286\") " pod="openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.945017 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a83e7e13-cb6b-4360-a57e-5fcf24f14286-serving-cert\") pod \"route-controller-manager-58599c79bf-pkqzn\" (UID: \"a83e7e13-cb6b-4360-a57e-5fcf24f14286\") " pod="openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.945094 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a83e7e13-cb6b-4360-a57e-5fcf24f14286-client-ca\") pod \"route-controller-manager-58599c79bf-pkqzn\" (UID: \"a83e7e13-cb6b-4360-a57e-5fcf24f14286\") " pod="openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn" Jan 29 11:28:43 crc kubenswrapper[4766]: I0129 11:28:43.945218 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a83e7e13-cb6b-4360-a57e-5fcf24f14286-config\") pod \"route-controller-manager-58599c79bf-pkqzn\" (UID: \"a83e7e13-cb6b-4360-a57e-5fcf24f14286\") " pod="openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn" Jan 29 11:28:44 crc kubenswrapper[4766]: I0129 11:28:44.046203 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t59k\" (UniqueName: \"kubernetes.io/projected/a83e7e13-cb6b-4360-a57e-5fcf24f14286-kube-api-access-8t59k\") pod 
\"route-controller-manager-58599c79bf-pkqzn\" (UID: \"a83e7e13-cb6b-4360-a57e-5fcf24f14286\") " pod="openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn" Jan 29 11:28:44 crc kubenswrapper[4766]: I0129 11:28:44.046272 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a83e7e13-cb6b-4360-a57e-5fcf24f14286-serving-cert\") pod \"route-controller-manager-58599c79bf-pkqzn\" (UID: \"a83e7e13-cb6b-4360-a57e-5fcf24f14286\") " pod="openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn" Jan 29 11:28:44 crc kubenswrapper[4766]: I0129 11:28:44.046333 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a83e7e13-cb6b-4360-a57e-5fcf24f14286-client-ca\") pod \"route-controller-manager-58599c79bf-pkqzn\" (UID: \"a83e7e13-cb6b-4360-a57e-5fcf24f14286\") " pod="openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn" Jan 29 11:28:44 crc kubenswrapper[4766]: I0129 11:28:44.046376 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a83e7e13-cb6b-4360-a57e-5fcf24f14286-config\") pod \"route-controller-manager-58599c79bf-pkqzn\" (UID: \"a83e7e13-cb6b-4360-a57e-5fcf24f14286\") " pod="openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn" Jan 29 11:28:44 crc kubenswrapper[4766]: I0129 11:28:44.048193 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a83e7e13-cb6b-4360-a57e-5fcf24f14286-client-ca\") pod \"route-controller-manager-58599c79bf-pkqzn\" (UID: \"a83e7e13-cb6b-4360-a57e-5fcf24f14286\") " pod="openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn" Jan 29 11:28:44 crc kubenswrapper[4766]: I0129 11:28:44.049072 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a83e7e13-cb6b-4360-a57e-5fcf24f14286-config\") pod \"route-controller-manager-58599c79bf-pkqzn\" (UID: \"a83e7e13-cb6b-4360-a57e-5fcf24f14286\") " pod="openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn" Jan 29 11:28:44 crc kubenswrapper[4766]: I0129 11:28:44.051874 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a83e7e13-cb6b-4360-a57e-5fcf24f14286-serving-cert\") pod \"route-controller-manager-58599c79bf-pkqzn\" (UID: \"a83e7e13-cb6b-4360-a57e-5fcf24f14286\") " pod="openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn" Jan 29 11:28:44 crc kubenswrapper[4766]: I0129 11:28:44.080623 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t59k\" (UniqueName: \"kubernetes.io/projected/a83e7e13-cb6b-4360-a57e-5fcf24f14286-kube-api-access-8t59k\") pod \"route-controller-manager-58599c79bf-pkqzn\" (UID: \"a83e7e13-cb6b-4360-a57e-5fcf24f14286\") " pod="openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn" Jan 29 11:28:44 crc kubenswrapper[4766]: I0129 11:28:44.108226 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn" Jan 29 11:28:44 crc kubenswrapper[4766]: I0129 11:28:44.291316 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn"] Jan 29 11:28:44 crc kubenswrapper[4766]: I0129 11:28:44.728272 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn" event={"ID":"a83e7e13-cb6b-4360-a57e-5fcf24f14286","Type":"ContainerStarted","Data":"e67120d98abae405ad6bc9f877a5ce6a188f5218aea1c002323303d169269e98"} Jan 29 11:28:44 crc kubenswrapper[4766]: I0129 11:28:44.728312 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn" event={"ID":"a83e7e13-cb6b-4360-a57e-5fcf24f14286","Type":"ContainerStarted","Data":"7b980399c001815adbe8b6513e0123139388ce412a070d920124c9251d792f2c"} Jan 29 11:28:44 crc kubenswrapper[4766]: I0129 11:28:44.728674 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn" Jan 29 11:28:44 crc kubenswrapper[4766]: I0129 11:28:44.748722 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn" podStartSLOduration=2.748704428 podStartE2EDuration="2.748704428s" podCreationTimestamp="2026-01-29 11:28:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:28:44.74535014 +0000 UTC m=+461.857743181" watchObservedRunningTime="2026-01-29 11:28:44.748704428 +0000 UTC m=+461.861097439" Jan 29 11:28:44 crc kubenswrapper[4766]: I0129 11:28:44.752298 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn" Jan 29 11:28:45 crc kubenswrapper[4766]: I0129 11:28:45.235499 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b592b00c-fc1b-40b5-bca9-dcb11ae8a698" path="/var/lib/kubelet/pods/b592b00c-fc1b-40b5-bca9-dcb11ae8a698/volumes" Jan 29 11:28:45 crc kubenswrapper[4766]: I0129 11:28:45.236260 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e174bfb2-e9f9-4add-ae12-f4a024e975f4" path="/var/lib/kubelet/pods/e174bfb2-e9f9-4add-ae12-f4a024e975f4/volumes" Jan 29 11:28:45 crc kubenswrapper[4766]: I0129 11:28:45.772196 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz"] Jan 29 11:28:45 crc kubenswrapper[4766]: I0129 11:28:45.773088 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz" Jan 29 11:28:45 crc kubenswrapper[4766]: I0129 11:28:45.774724 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 11:28:45 crc kubenswrapper[4766]: I0129 11:28:45.775675 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 11:28:45 crc kubenswrapper[4766]: I0129 11:28:45.775922 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 11:28:45 crc kubenswrapper[4766]: I0129 11:28:45.776387 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 11:28:45 crc kubenswrapper[4766]: I0129 11:28:45.777870 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 11:28:45 crc kubenswrapper[4766]: I0129 11:28:45.780296 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 11:28:45 crc kubenswrapper[4766]: I0129 11:28:45.784618 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 11:28:45 crc kubenswrapper[4766]: I0129 11:28:45.786061 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz"] Jan 29 11:28:45 crc kubenswrapper[4766]: I0129 11:28:45.870431 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-proxy-ca-bundles\") pod \"controller-manager-54ffb9c8c5-mn9gz\" (UID: \"b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73\") " pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz" Jan 29 11:28:45 crc kubenswrapper[4766]: I0129 11:28:45.870505 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-config\") pod \"controller-manager-54ffb9c8c5-mn9gz\" (UID: \"b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73\") " pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz" Jan 29 11:28:45 crc kubenswrapper[4766]: I0129 11:28:45.870536 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-serving-cert\") pod \"controller-manager-54ffb9c8c5-mn9gz\" (UID: \"b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73\") " pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz" Jan 29 11:28:45 crc kubenswrapper[4766]: I0129 11:28:45.870564 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-client-ca\") pod \"controller-manager-54ffb9c8c5-mn9gz\" (UID: \"b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73\") " pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz" Jan 29 11:28:45 crc kubenswrapper[4766]: I0129 11:28:45.870611 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsg8z\" (UniqueName: 
\"kubernetes.io/projected/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-kube-api-access-wsg8z\") pod \"controller-manager-54ffb9c8c5-mn9gz\" (UID: \"b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73\") " pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz" Jan 29 11:28:45 crc kubenswrapper[4766]: I0129 11:28:45.972107 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-config\") pod \"controller-manager-54ffb9c8c5-mn9gz\" (UID: \"b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73\") " pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz" Jan 29 11:28:45 crc kubenswrapper[4766]: I0129 11:28:45.972829 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-serving-cert\") pod \"controller-manager-54ffb9c8c5-mn9gz\" (UID: \"b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73\") " pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz" Jan 29 11:28:45 crc kubenswrapper[4766]: I0129 11:28:45.972947 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-client-ca\") pod \"controller-manager-54ffb9c8c5-mn9gz\" (UID: \"b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73\") " pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz" Jan 29 11:28:45 crc kubenswrapper[4766]: I0129 11:28:45.973039 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsg8z\" (UniqueName: \"kubernetes.io/projected/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-kube-api-access-wsg8z\") pod \"controller-manager-54ffb9c8c5-mn9gz\" (UID: \"b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73\") " pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz" Jan 29 11:28:45 crc kubenswrapper[4766]: I0129 11:28:45.973093 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-proxy-ca-bundles\") pod \"controller-manager-54ffb9c8c5-mn9gz\" (UID: \"b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73\") " pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz" Jan 29 11:28:45 crc kubenswrapper[4766]: I0129 11:28:45.973382 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-config\") pod \"controller-manager-54ffb9c8c5-mn9gz\" (UID: \"b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73\") " pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz" Jan 29 11:28:45 crc kubenswrapper[4766]: I0129 11:28:45.974057 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-client-ca\") pod \"controller-manager-54ffb9c8c5-mn9gz\" (UID: \"b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73\") " pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz" Jan 29 11:28:45 crc kubenswrapper[4766]: I0129 11:28:45.974553 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-proxy-ca-bundles\") pod \"controller-manager-54ffb9c8c5-mn9gz\" (UID: \"b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73\") " 
pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz" Jan 29 11:28:45 crc kubenswrapper[4766]: I0129 11:28:45.982376 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-serving-cert\") pod \"controller-manager-54ffb9c8c5-mn9gz\" (UID: \"b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73\") " pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz" Jan 29 11:28:45 crc kubenswrapper[4766]: I0129 11:28:45.998808 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsg8z\" (UniqueName: \"kubernetes.io/projected/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-kube-api-access-wsg8z\") pod \"controller-manager-54ffb9c8c5-mn9gz\" (UID: \"b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73\") " pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz" Jan 29 11:28:46 crc kubenswrapper[4766]: I0129 11:28:46.091057 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz" Jan 29 11:28:46 crc kubenswrapper[4766]: I0129 11:28:46.362039 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:28:46 crc kubenswrapper[4766]: I0129 11:28:46.362563 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:28:46 crc kubenswrapper[4766]: I0129 11:28:46.473951 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz"] Jan 29 11:28:46 crc kubenswrapper[4766]: W0129 11:28:46.479308 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2fbe4ba_2517_4e3d_bdab_612a8b4c4b73.slice/crio-e4579e7fafb45924700fae1f75cbc53404865f5ae18131307ebc59572136e50d WatchSource:0}: Error finding container e4579e7fafb45924700fae1f75cbc53404865f5ae18131307ebc59572136e50d: Status 404 returned error can't find the container with id e4579e7fafb45924700fae1f75cbc53404865f5ae18131307ebc59572136e50d Jan 29 11:28:46 crc kubenswrapper[4766]: I0129 11:28:46.743480 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz" event={"ID":"b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73","Type":"ContainerStarted","Data":"71c41a02de4c05f9485033e5b8ec69b2fb4838ae3562345f039ebcfad7525ed1"} Jan 29 11:28:46 crc kubenswrapper[4766]: I0129 11:28:46.743524 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz" event={"ID":"b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73","Type":"ContainerStarted","Data":"e4579e7fafb45924700fae1f75cbc53404865f5ae18131307ebc59572136e50d"} Jan 29 11:28:47 crc kubenswrapper[4766]: I0129 11:28:47.748194 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz" Jan 29 11:28:47 crc kubenswrapper[4766]: I0129 11:28:47.753734 4766 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz" Jan 29 11:28:47 crc kubenswrapper[4766]: I0129 11:28:47.768481 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz" podStartSLOduration=5.768461187 podStartE2EDuration="5.768461187s" podCreationTimestamp="2026-01-29 11:28:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:28:46.763774686 +0000 UTC m=+463.876167697" watchObservedRunningTime="2026-01-29 11:28:47.768461187 +0000 UTC m=+464.880854198" Jan 29 11:28:48 crc kubenswrapper[4766]: I0129 11:28:48.682339 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6mp9b"] Jan 29 11:28:48 crc kubenswrapper[4766]: I0129 11:28:48.682653 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6mp9b" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" containerName="registry-server" containerID="cri-o://3f56d440b7fb273f0534dd3ff1b25ac7c059dedeac9b33c24125192fcfc1ed0f" gracePeriod=2 Jan 29 11:28:48 crc kubenswrapper[4766]: I0129 11:28:48.885060 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bd99b"] Jan 29 11:28:48 crc kubenswrapper[4766]: I0129 11:28:48.885355 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bd99b" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" containerName="registry-server" containerID="cri-o://82707fe664c151d0458276e934b21a4258aee603312c376e36730df4a67eadaa" gracePeriod=2 Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.115709 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6mp9b" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.223091 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74f9c23f-66e4-4082-b80f-f4966819b6d7-catalog-content\") pod \"74f9c23f-66e4-4082-b80f-f4966819b6d7\" (UID: \"74f9c23f-66e4-4082-b80f-f4966819b6d7\") " Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.223202 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74f9c23f-66e4-4082-b80f-f4966819b6d7-utilities\") pod \"74f9c23f-66e4-4082-b80f-f4966819b6d7\" (UID: \"74f9c23f-66e4-4082-b80f-f4966819b6d7\") " Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.223333 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4v74\" (UniqueName: \"kubernetes.io/projected/74f9c23f-66e4-4082-b80f-f4966819b6d7-kube-api-access-k4v74\") pod \"74f9c23f-66e4-4082-b80f-f4966819b6d7\" (UID: \"74f9c23f-66e4-4082-b80f-f4966819b6d7\") " Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.224708 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74f9c23f-66e4-4082-b80f-f4966819b6d7-utilities" (OuterVolumeSpecName: "utilities") pod "74f9c23f-66e4-4082-b80f-f4966819b6d7" (UID: "74f9c23f-66e4-4082-b80f-f4966819b6d7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.232636 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74f9c23f-66e4-4082-b80f-f4966819b6d7-kube-api-access-k4v74" (OuterVolumeSpecName: "kube-api-access-k4v74") pod "74f9c23f-66e4-4082-b80f-f4966819b6d7" (UID: "74f9c23f-66e4-4082-b80f-f4966819b6d7"). InnerVolumeSpecName "kube-api-access-k4v74". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.280467 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74f9c23f-66e4-4082-b80f-f4966819b6d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74f9c23f-66e4-4082-b80f-f4966819b6d7" (UID: "74f9c23f-66e4-4082-b80f-f4966819b6d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.327085 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4v74\" (UniqueName: \"kubernetes.io/projected/74f9c23f-66e4-4082-b80f-f4966819b6d7-kube-api-access-k4v74\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.327189 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74f9c23f-66e4-4082-b80f-f4966819b6d7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.327203 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74f9c23f-66e4-4082-b80f-f4966819b6d7-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.345564 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bd99b" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.427986 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad6c1b2d-116e-4979-9676-c27cb40ee318-catalog-content\") pod \"ad6c1b2d-116e-4979-9676-c27cb40ee318\" (UID: \"ad6c1b2d-116e-4979-9676-c27cb40ee318\") " Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.428125 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad6c1b2d-116e-4979-9676-c27cb40ee318-utilities\") pod \"ad6c1b2d-116e-4979-9676-c27cb40ee318\" (UID: \"ad6c1b2d-116e-4979-9676-c27cb40ee318\") " Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.428756 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tw6z\" (UniqueName: \"kubernetes.io/projected/ad6c1b2d-116e-4979-9676-c27cb40ee318-kube-api-access-8tw6z\") pod \"ad6c1b2d-116e-4979-9676-c27cb40ee318\" (UID: \"ad6c1b2d-116e-4979-9676-c27cb40ee318\") " Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.429389 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad6c1b2d-116e-4979-9676-c27cb40ee318-utilities" (OuterVolumeSpecName: "utilities") pod "ad6c1b2d-116e-4979-9676-c27cb40ee318" (UID: "ad6c1b2d-116e-4979-9676-c27cb40ee318"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.431902 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad6c1b2d-116e-4979-9676-c27cb40ee318-kube-api-access-8tw6z" (OuterVolumeSpecName: "kube-api-access-8tw6z") pod "ad6c1b2d-116e-4979-9676-c27cb40ee318" (UID: "ad6c1b2d-116e-4979-9676-c27cb40ee318"). InnerVolumeSpecName "kube-api-access-8tw6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.475868 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad6c1b2d-116e-4979-9676-c27cb40ee318-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad6c1b2d-116e-4979-9676-c27cb40ee318" (UID: "ad6c1b2d-116e-4979-9676-c27cb40ee318"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.530861 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad6c1b2d-116e-4979-9676-c27cb40ee318-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.530902 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tw6z\" (UniqueName: \"kubernetes.io/projected/ad6c1b2d-116e-4979-9676-c27cb40ee318-kube-api-access-8tw6z\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.530913 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad6c1b2d-116e-4979-9676-c27cb40ee318-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.761464 4766 generic.go:334] "Generic (PLEG): container finished" podID="ad6c1b2d-116e-4979-9676-c27cb40ee318" containerID="82707fe664c151d0458276e934b21a4258aee603312c376e36730df4a67eadaa" exitCode=0 Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.761532 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bd99b" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.761535 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd99b" event={"ID":"ad6c1b2d-116e-4979-9676-c27cb40ee318","Type":"ContainerDied","Data":"82707fe664c151d0458276e934b21a4258aee603312c376e36730df4a67eadaa"} Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.762237 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd99b" event={"ID":"ad6c1b2d-116e-4979-9676-c27cb40ee318","Type":"ContainerDied","Data":"b4e48adb11652401ec89970243065926b7af915e6382e52cb18d5267b3466291"} Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.762264 4766 scope.go:117] "RemoveContainer" containerID="82707fe664c151d0458276e934b21a4258aee603312c376e36730df4a67eadaa" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.765322 4766 generic.go:334] "Generic (PLEG): container finished" podID="74f9c23f-66e4-4082-b80f-f4966819b6d7" containerID="3f56d440b7fb273f0534dd3ff1b25ac7c059dedeac9b33c24125192fcfc1ed0f" exitCode=0 Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.765993 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6mp9b" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.771579 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6mp9b" event={"ID":"74f9c23f-66e4-4082-b80f-f4966819b6d7","Type":"ContainerDied","Data":"3f56d440b7fb273f0534dd3ff1b25ac7c059dedeac9b33c24125192fcfc1ed0f"} Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.771665 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6mp9b" event={"ID":"74f9c23f-66e4-4082-b80f-f4966819b6d7","Type":"ContainerDied","Data":"db2e0d0a2dd51bd163a94cff98f2e280c03b5086ef5f7b70bb62f0e4261d31f9"} Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.787915 4766 scope.go:117] "RemoveContainer" containerID="02a80e550839da309f6e873fefe8dbd102823b74d77d406e03964a6e6c84911e" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.804773 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bd99b"] Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.808082 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bd99b"] Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.826678 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6mp9b"] Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.830430 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6mp9b"] Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.840659 4766 scope.go:117] "RemoveContainer" containerID="740a7ad070c1fb948a4011840b005325e121074f9a6ab5f512f7899ec280bc4b" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.861563 4766 scope.go:117] "RemoveContainer" containerID="82707fe664c151d0458276e934b21a4258aee603312c376e36730df4a67eadaa" Jan 29 11:28:49 crc kubenswrapper[4766]: E0129 11:28:49.861953 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82707fe664c151d0458276e934b21a4258aee603312c376e36730df4a67eadaa\": container with ID starting with 82707fe664c151d0458276e934b21a4258aee603312c376e36730df4a67eadaa not found: ID does not exist" containerID="82707fe664c151d0458276e934b21a4258aee603312c376e36730df4a67eadaa" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.861986 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82707fe664c151d0458276e934b21a4258aee603312c376e36730df4a67eadaa"} err="failed to get container status \"82707fe664c151d0458276e934b21a4258aee603312c376e36730df4a67eadaa\": rpc error: code = NotFound desc = could not find container \"82707fe664c151d0458276e934b21a4258aee603312c376e36730df4a67eadaa\": container with ID starting with 82707fe664c151d0458276e934b21a4258aee603312c376e36730df4a67eadaa not found: ID does not exist" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.862009 4766 scope.go:117] "RemoveContainer" containerID="02a80e550839da309f6e873fefe8dbd102823b74d77d406e03964a6e6c84911e" Jan 29 11:28:49 crc kubenswrapper[4766]: E0129 11:28:49.862311 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02a80e550839da309f6e873fefe8dbd102823b74d77d406e03964a6e6c84911e\": container with ID starting with 02a80e550839da309f6e873fefe8dbd102823b74d77d406e03964a6e6c84911e not found: 
ID does not exist" containerID="02a80e550839da309f6e873fefe8dbd102823b74d77d406e03964a6e6c84911e" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.862354 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02a80e550839da309f6e873fefe8dbd102823b74d77d406e03964a6e6c84911e"} err="failed to get container status \"02a80e550839da309f6e873fefe8dbd102823b74d77d406e03964a6e6c84911e\": rpc error: code = NotFound desc = could not find container \"02a80e550839da309f6e873fefe8dbd102823b74d77d406e03964a6e6c84911e\": container with ID starting with 02a80e550839da309f6e873fefe8dbd102823b74d77d406e03964a6e6c84911e not found: ID does not exist" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.862384 4766 scope.go:117] "RemoveContainer" containerID="740a7ad070c1fb948a4011840b005325e121074f9a6ab5f512f7899ec280bc4b" Jan 29 11:28:49 crc kubenswrapper[4766]: E0129 11:28:49.862705 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"740a7ad070c1fb948a4011840b005325e121074f9a6ab5f512f7899ec280bc4b\": container with ID starting with 740a7ad070c1fb948a4011840b005325e121074f9a6ab5f512f7899ec280bc4b not found: ID does not exist" containerID="740a7ad070c1fb948a4011840b005325e121074f9a6ab5f512f7899ec280bc4b" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.862735 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"740a7ad070c1fb948a4011840b005325e121074f9a6ab5f512f7899ec280bc4b"} err="failed to get container status \"740a7ad070c1fb948a4011840b005325e121074f9a6ab5f512f7899ec280bc4b\": rpc error: code = NotFound desc = could not find container \"740a7ad070c1fb948a4011840b005325e121074f9a6ab5f512f7899ec280bc4b\": container with ID starting with 740a7ad070c1fb948a4011840b005325e121074f9a6ab5f512f7899ec280bc4b not found: ID does not exist" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.862754 4766 scope.go:117] "RemoveContainer" containerID="3f56d440b7fb273f0534dd3ff1b25ac7c059dedeac9b33c24125192fcfc1ed0f" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.884741 4766 scope.go:117] "RemoveContainer" containerID="e3441512cb68c0342c4b1d74a268765c9b4c7a8ca3ca96e71a43944cc73f834f" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.901508 4766 scope.go:117] "RemoveContainer" containerID="79996c31d26696e3569d8e05329a9d675802127eb47978793f0dd89b52cd60bf" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.919131 4766 scope.go:117] "RemoveContainer" containerID="3f56d440b7fb273f0534dd3ff1b25ac7c059dedeac9b33c24125192fcfc1ed0f" Jan 29 11:28:49 crc kubenswrapper[4766]: E0129 11:28:49.919792 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f56d440b7fb273f0534dd3ff1b25ac7c059dedeac9b33c24125192fcfc1ed0f\": container with ID starting with 3f56d440b7fb273f0534dd3ff1b25ac7c059dedeac9b33c24125192fcfc1ed0f not found: ID does not exist" containerID="3f56d440b7fb273f0534dd3ff1b25ac7c059dedeac9b33c24125192fcfc1ed0f" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.919841 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f56d440b7fb273f0534dd3ff1b25ac7c059dedeac9b33c24125192fcfc1ed0f"} err="failed to get container status \"3f56d440b7fb273f0534dd3ff1b25ac7c059dedeac9b33c24125192fcfc1ed0f\": rpc error: code = NotFound desc = could not find container 
\"3f56d440b7fb273f0534dd3ff1b25ac7c059dedeac9b33c24125192fcfc1ed0f\": container with ID starting with 3f56d440b7fb273f0534dd3ff1b25ac7c059dedeac9b33c24125192fcfc1ed0f not found: ID does not exist" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.919874 4766 scope.go:117] "RemoveContainer" containerID="e3441512cb68c0342c4b1d74a268765c9b4c7a8ca3ca96e71a43944cc73f834f" Jan 29 11:28:49 crc kubenswrapper[4766]: E0129 11:28:49.920355 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3441512cb68c0342c4b1d74a268765c9b4c7a8ca3ca96e71a43944cc73f834f\": container with ID starting with e3441512cb68c0342c4b1d74a268765c9b4c7a8ca3ca96e71a43944cc73f834f not found: ID does not exist" containerID="e3441512cb68c0342c4b1d74a268765c9b4c7a8ca3ca96e71a43944cc73f834f" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.920398 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3441512cb68c0342c4b1d74a268765c9b4c7a8ca3ca96e71a43944cc73f834f"} err="failed to get container status \"e3441512cb68c0342c4b1d74a268765c9b4c7a8ca3ca96e71a43944cc73f834f\": rpc error: code = NotFound desc = could not find container \"e3441512cb68c0342c4b1d74a268765c9b4c7a8ca3ca96e71a43944cc73f834f\": container with ID starting with e3441512cb68c0342c4b1d74a268765c9b4c7a8ca3ca96e71a43944cc73f834f not found: ID does not exist" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.920443 4766 scope.go:117] "RemoveContainer" containerID="79996c31d26696e3569d8e05329a9d675802127eb47978793f0dd89b52cd60bf" Jan 29 11:28:49 crc kubenswrapper[4766]: E0129 11:28:49.920896 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79996c31d26696e3569d8e05329a9d675802127eb47978793f0dd89b52cd60bf\": container with ID starting with 79996c31d26696e3569d8e05329a9d675802127eb47978793f0dd89b52cd60bf not found: ID does not exist" containerID="79996c31d26696e3569d8e05329a9d675802127eb47978793f0dd89b52cd60bf" Jan 29 11:28:49 crc kubenswrapper[4766]: I0129 11:28:49.920933 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79996c31d26696e3569d8e05329a9d675802127eb47978793f0dd89b52cd60bf"} err="failed to get container status \"79996c31d26696e3569d8e05329a9d675802127eb47978793f0dd89b52cd60bf\": rpc error: code = NotFound desc = could not find container \"79996c31d26696e3569d8e05329a9d675802127eb47978793f0dd89b52cd60bf\": container with ID starting with 79996c31d26696e3569d8e05329a9d675802127eb47978793f0dd89b52cd60bf not found: ID does not exist" Jan 29 11:28:51 crc kubenswrapper[4766]: I0129 11:28:51.239596 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" path="/var/lib/kubelet/pods/74f9c23f-66e4-4082-b80f-f4966819b6d7/volumes" Jan 29 11:28:51 crc kubenswrapper[4766]: I0129 11:28:51.240742 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" path="/var/lib/kubelet/pods/ad6c1b2d-116e-4979-9676-c27cb40ee318/volumes" Jan 29 11:28:51 crc kubenswrapper[4766]: I0129 11:28:51.282257 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mpsxm"] Jan 29 11:28:51 crc kubenswrapper[4766]: I0129 11:28:51.282493 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mpsxm" 
podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" containerName="registry-server" containerID="cri-o://9bcaf801ed9bb1e11c43ea0c0f9fb31f52fe070c28cee6842c7f93488f044243" gracePeriod=2 Jan 29 11:28:51 crc kubenswrapper[4766]: I0129 11:28:51.760871 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mpsxm" Jan 29 11:28:51 crc kubenswrapper[4766]: I0129 11:28:51.788907 4766 generic.go:334] "Generic (PLEG): container finished" podID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" containerID="9bcaf801ed9bb1e11c43ea0c0f9fb31f52fe070c28cee6842c7f93488f044243" exitCode=0 Jan 29 11:28:51 crc kubenswrapper[4766]: I0129 11:28:51.788948 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpsxm" event={"ID":"aa1d4f87-07d9-4499-a955-15f90a40a4ad","Type":"ContainerDied","Data":"9bcaf801ed9bb1e11c43ea0c0f9fb31f52fe070c28cee6842c7f93488f044243"} Jan 29 11:28:51 crc kubenswrapper[4766]: I0129 11:28:51.788976 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpsxm" event={"ID":"aa1d4f87-07d9-4499-a955-15f90a40a4ad","Type":"ContainerDied","Data":"b73fe5ef91a070ab5c332eab9844518154ff9d2a6d1a4fbd0e7ab1ab71ad7ac7"} Jan 29 11:28:51 crc kubenswrapper[4766]: I0129 11:28:51.789000 4766 scope.go:117] "RemoveContainer" containerID="9bcaf801ed9bb1e11c43ea0c0f9fb31f52fe070c28cee6842c7f93488f044243" Jan 29 11:28:51 crc kubenswrapper[4766]: I0129 11:28:51.789117 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mpsxm" Jan 29 11:28:51 crc kubenswrapper[4766]: I0129 11:28:51.806293 4766 scope.go:117] "RemoveContainer" containerID="a1c190d20916e90ee7b9ffeb8fa2dcf165c901660ce1f3981322e259dce88f91" Jan 29 11:28:51 crc kubenswrapper[4766]: I0129 11:28:51.832688 4766 scope.go:117] "RemoveContainer" containerID="5e72ed849442e9fa0bb78013d1e491085467d00535613fb618a87c1d1ba73a17" Jan 29 11:28:51 crc kubenswrapper[4766]: I0129 11:28:51.849639 4766 scope.go:117] "RemoveContainer" containerID="9bcaf801ed9bb1e11c43ea0c0f9fb31f52fe070c28cee6842c7f93488f044243" Jan 29 11:28:51 crc kubenswrapper[4766]: E0129 11:28:51.850289 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bcaf801ed9bb1e11c43ea0c0f9fb31f52fe070c28cee6842c7f93488f044243\": container with ID starting with 9bcaf801ed9bb1e11c43ea0c0f9fb31f52fe070c28cee6842c7f93488f044243 not found: ID does not exist" containerID="9bcaf801ed9bb1e11c43ea0c0f9fb31f52fe070c28cee6842c7f93488f044243" Jan 29 11:28:51 crc kubenswrapper[4766]: I0129 11:28:51.850345 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bcaf801ed9bb1e11c43ea0c0f9fb31f52fe070c28cee6842c7f93488f044243"} err="failed to get container status \"9bcaf801ed9bb1e11c43ea0c0f9fb31f52fe070c28cee6842c7f93488f044243\": rpc error: code = NotFound desc = could not find container \"9bcaf801ed9bb1e11c43ea0c0f9fb31f52fe070c28cee6842c7f93488f044243\": container with ID starting with 9bcaf801ed9bb1e11c43ea0c0f9fb31f52fe070c28cee6842c7f93488f044243 not found: ID does not exist" Jan 29 11:28:51 crc kubenswrapper[4766]: I0129 11:28:51.850380 4766 scope.go:117] "RemoveContainer" containerID="a1c190d20916e90ee7b9ffeb8fa2dcf165c901660ce1f3981322e259dce88f91" Jan 29 11:28:51 crc kubenswrapper[4766]: E0129 11:28:51.850777 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"a1c190d20916e90ee7b9ffeb8fa2dcf165c901660ce1f3981322e259dce88f91\": container with ID starting with a1c190d20916e90ee7b9ffeb8fa2dcf165c901660ce1f3981322e259dce88f91 not found: ID does not exist" containerID="a1c190d20916e90ee7b9ffeb8fa2dcf165c901660ce1f3981322e259dce88f91" Jan 29 11:28:51 crc kubenswrapper[4766]: I0129 11:28:51.850811 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1c190d20916e90ee7b9ffeb8fa2dcf165c901660ce1f3981322e259dce88f91"} err="failed to get container status \"a1c190d20916e90ee7b9ffeb8fa2dcf165c901660ce1f3981322e259dce88f91\": rpc error: code = NotFound desc = could not find container \"a1c190d20916e90ee7b9ffeb8fa2dcf165c901660ce1f3981322e259dce88f91\": container with ID starting with a1c190d20916e90ee7b9ffeb8fa2dcf165c901660ce1f3981322e259dce88f91 not found: ID does not exist" Jan 29 11:28:51 crc kubenswrapper[4766]: I0129 11:28:51.850841 4766 scope.go:117] "RemoveContainer" containerID="5e72ed849442e9fa0bb78013d1e491085467d00535613fb618a87c1d1ba73a17" Jan 29 11:28:51 crc kubenswrapper[4766]: E0129 11:28:51.851174 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e72ed849442e9fa0bb78013d1e491085467d00535613fb618a87c1d1ba73a17\": container with ID starting with 5e72ed849442e9fa0bb78013d1e491085467d00535613fb618a87c1d1ba73a17 not found: ID does not exist" containerID="5e72ed849442e9fa0bb78013d1e491085467d00535613fb618a87c1d1ba73a17" Jan 29 11:28:51 crc kubenswrapper[4766]: I0129 11:28:51.851206 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e72ed849442e9fa0bb78013d1e491085467d00535613fb618a87c1d1ba73a17"} err="failed to get container status \"5e72ed849442e9fa0bb78013d1e491085467d00535613fb618a87c1d1ba73a17\": rpc error: code = NotFound desc = could not find container \"5e72ed849442e9fa0bb78013d1e491085467d00535613fb618a87c1d1ba73a17\": container with ID starting with 5e72ed849442e9fa0bb78013d1e491085467d00535613fb618a87c1d1ba73a17 not found: ID does not exist" Jan 29 11:28:51 crc kubenswrapper[4766]: I0129 11:28:51.857710 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa1d4f87-07d9-4499-a955-15f90a40a4ad-utilities\") pod \"aa1d4f87-07d9-4499-a955-15f90a40a4ad\" (UID: \"aa1d4f87-07d9-4499-a955-15f90a40a4ad\") " Jan 29 11:28:51 crc kubenswrapper[4766]: I0129 11:28:51.857770 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa1d4f87-07d9-4499-a955-15f90a40a4ad-catalog-content\") pod \"aa1d4f87-07d9-4499-a955-15f90a40a4ad\" (UID: \"aa1d4f87-07d9-4499-a955-15f90a40a4ad\") " Jan 29 11:28:51 crc kubenswrapper[4766]: I0129 11:28:51.857878 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqq9c\" (UniqueName: \"kubernetes.io/projected/aa1d4f87-07d9-4499-a955-15f90a40a4ad-kube-api-access-sqq9c\") pod \"aa1d4f87-07d9-4499-a955-15f90a40a4ad\" (UID: \"aa1d4f87-07d9-4499-a955-15f90a40a4ad\") " Jan 29 11:28:51 crc kubenswrapper[4766]: I0129 11:28:51.859085 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa1d4f87-07d9-4499-a955-15f90a40a4ad-utilities" (OuterVolumeSpecName: "utilities") pod "aa1d4f87-07d9-4499-a955-15f90a40a4ad" (UID: 
"aa1d4f87-07d9-4499-a955-15f90a40a4ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:28:51 crc kubenswrapper[4766]: I0129 11:28:51.865965 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa1d4f87-07d9-4499-a955-15f90a40a4ad-kube-api-access-sqq9c" (OuterVolumeSpecName: "kube-api-access-sqq9c") pod "aa1d4f87-07d9-4499-a955-15f90a40a4ad" (UID: "aa1d4f87-07d9-4499-a955-15f90a40a4ad"). InnerVolumeSpecName "kube-api-access-sqq9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:28:51 crc kubenswrapper[4766]: I0129 11:28:51.959765 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqq9c\" (UniqueName: \"kubernetes.io/projected/aa1d4f87-07d9-4499-a955-15f90a40a4ad-kube-api-access-sqq9c\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:51 crc kubenswrapper[4766]: I0129 11:28:51.959881 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa1d4f87-07d9-4499-a955-15f90a40a4ad-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:51 crc kubenswrapper[4766]: I0129 11:28:51.982613 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa1d4f87-07d9-4499-a955-15f90a40a4ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa1d4f87-07d9-4499-a955-15f90a40a4ad" (UID: "aa1d4f87-07d9-4499-a955-15f90a40a4ad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:28:52 crc kubenswrapper[4766]: I0129 11:28:52.061023 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa1d4f87-07d9-4499-a955-15f90a40a4ad-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:28:52 crc kubenswrapper[4766]: I0129 11:28:52.122352 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mpsxm"] Jan 29 11:28:52 crc kubenswrapper[4766]: I0129 11:28:52.130101 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mpsxm"] Jan 29 11:28:53 crc kubenswrapper[4766]: I0129 11:28:53.229958 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" path="/var/lib/kubelet/pods/aa1d4f87-07d9-4499-a955-15f90a40a4ad/volumes" Jan 29 11:28:59 crc kubenswrapper[4766]: I0129 11:28:59.621375 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz"] Jan 29 11:28:59 crc kubenswrapper[4766]: I0129 11:28:59.621907 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz" podUID="b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73" containerName="controller-manager" containerID="cri-o://71c41a02de4c05f9485033e5b8ec69b2fb4838ae3562345f039ebcfad7525ed1" gracePeriod=30 Jan 29 11:28:59 crc kubenswrapper[4766]: I0129 11:28:59.710711 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn"] Jan 29 11:28:59 crc kubenswrapper[4766]: I0129 11:28:59.710914 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn" podUID="a83e7e13-cb6b-4360-a57e-5fcf24f14286" containerName="route-controller-manager" 
containerID="cri-o://e67120d98abae405ad6bc9f877a5ce6a188f5218aea1c002323303d169269e98" gracePeriod=30 Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.727843 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.744813 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.759843 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-648659b994-87x24"] Jan 29 11:29:00 crc kubenswrapper[4766]: E0129 11:29:00.760098 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" containerName="extract-utilities" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.760118 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" containerName="extract-utilities" Jan 29 11:29:00 crc kubenswrapper[4766]: E0129 11:29:00.760137 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" containerName="extract-utilities" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.760147 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" containerName="extract-utilities" Jan 29 11:29:00 crc kubenswrapper[4766]: E0129 11:29:00.760158 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" containerName="extract-content" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.760165 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" containerName="extract-content" Jan 29 11:29:00 crc kubenswrapper[4766]: E0129 11:29:00.760175 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" containerName="registry-server" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.760182 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" containerName="registry-server" Jan 29 11:29:00 crc kubenswrapper[4766]: E0129 11:29:00.760191 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a83e7e13-cb6b-4360-a57e-5fcf24f14286" containerName="route-controller-manager" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.760210 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a83e7e13-cb6b-4360-a57e-5fcf24f14286" containerName="route-controller-manager" Jan 29 11:29:00 crc kubenswrapper[4766]: E0129 11:29:00.760218 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" containerName="registry-server" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.760226 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" containerName="registry-server" Jan 29 11:29:00 crc kubenswrapper[4766]: E0129 11:29:00.760238 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" containerName="extract-content" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.760244 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" containerName="extract-content" Jan 29 11:29:00 
crc kubenswrapper[4766]: E0129 11:29:00.760256 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73" containerName="controller-manager" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.760266 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73" containerName="controller-manager" Jan 29 11:29:00 crc kubenswrapper[4766]: E0129 11:29:00.760278 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" containerName="extract-utilities" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.760285 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" containerName="extract-utilities" Jan 29 11:29:00 crc kubenswrapper[4766]: E0129 11:29:00.760296 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" containerName="registry-server" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.760304 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" containerName="registry-server" Jan 29 11:29:00 crc kubenswrapper[4766]: E0129 11:29:00.760316 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" containerName="extract-content" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.760324 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" containerName="extract-content" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.760483 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="74f9c23f-66e4-4082-b80f-f4966819b6d7" containerName="registry-server" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.760499 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="a83e7e13-cb6b-4360-a57e-5fcf24f14286" containerName="route-controller-manager" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.760514 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73" containerName="controller-manager" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.760524 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad6c1b2d-116e-4979-9676-c27cb40ee318" containerName="registry-server" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.760533 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa1d4f87-07d9-4499-a955-15f90a40a4ad" containerName="registry-server" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.760884 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-648659b994-87x24" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.802922 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-648659b994-87x24"] Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.841689 4766 generic.go:334] "Generic (PLEG): container finished" podID="a83e7e13-cb6b-4360-a57e-5fcf24f14286" containerID="e67120d98abae405ad6bc9f877a5ce6a188f5218aea1c002323303d169269e98" exitCode=0 Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.841752 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn" event={"ID":"a83e7e13-cb6b-4360-a57e-5fcf24f14286","Type":"ContainerDied","Data":"e67120d98abae405ad6bc9f877a5ce6a188f5218aea1c002323303d169269e98"} Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.841778 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn" event={"ID":"a83e7e13-cb6b-4360-a57e-5fcf24f14286","Type":"ContainerDied","Data":"7b980399c001815adbe8b6513e0123139388ce412a070d920124c9251d792f2c"} Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.841796 4766 scope.go:117] "RemoveContainer" containerID="e67120d98abae405ad6bc9f877a5ce6a188f5218aea1c002323303d169269e98" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.841917 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.847252 4766 generic.go:334] "Generic (PLEG): container finished" podID="b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73" containerID="71c41a02de4c05f9485033e5b8ec69b2fb4838ae3562345f039ebcfad7525ed1" exitCode=0 Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.847289 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz" event={"ID":"b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73","Type":"ContainerDied","Data":"71c41a02de4c05f9485033e5b8ec69b2fb4838ae3562345f039ebcfad7525ed1"} Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.847312 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz" event={"ID":"b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73","Type":"ContainerDied","Data":"e4579e7fafb45924700fae1f75cbc53404865f5ae18131307ebc59572136e50d"} Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.847372 4766 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.862440 4766 scope.go:117] "RemoveContainer" containerID="e67120d98abae405ad6bc9f877a5ce6a188f5218aea1c002323303d169269e98"
Jan 29 11:29:00 crc kubenswrapper[4766]: E0129 11:29:00.863025 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e67120d98abae405ad6bc9f877a5ce6a188f5218aea1c002323303d169269e98\": container with ID starting with e67120d98abae405ad6bc9f877a5ce6a188f5218aea1c002323303d169269e98 not found: ID does not exist" containerID="e67120d98abae405ad6bc9f877a5ce6a188f5218aea1c002323303d169269e98"
Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.863146 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e67120d98abae405ad6bc9f877a5ce6a188f5218aea1c002323303d169269e98"} err="failed to get container status \"e67120d98abae405ad6bc9f877a5ce6a188f5218aea1c002323303d169269e98\": rpc error: code = NotFound desc = could not find container \"e67120d98abae405ad6bc9f877a5ce6a188f5218aea1c002323303d169269e98\": container with ID starting with e67120d98abae405ad6bc9f877a5ce6a188f5218aea1c002323303d169269e98 not found: ID does not exist"
Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.863221 4766 scope.go:117] "RemoveContainer" containerID="71c41a02de4c05f9485033e5b8ec69b2fb4838ae3562345f039ebcfad7525ed1"
Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.879555 4766 scope.go:117] "RemoveContainer" containerID="71c41a02de4c05f9485033e5b8ec69b2fb4838ae3562345f039ebcfad7525ed1"
Jan 29 11:29:00 crc kubenswrapper[4766]: E0129 11:29:00.880071 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71c41a02de4c05f9485033e5b8ec69b2fb4838ae3562345f039ebcfad7525ed1\": container with ID starting with 71c41a02de4c05f9485033e5b8ec69b2fb4838ae3562345f039ebcfad7525ed1 not found: ID does not exist" containerID="71c41a02de4c05f9485033e5b8ec69b2fb4838ae3562345f039ebcfad7525ed1"
Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.880152 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71c41a02de4c05f9485033e5b8ec69b2fb4838ae3562345f039ebcfad7525ed1"} err="failed to get container status \"71c41a02de4c05f9485033e5b8ec69b2fb4838ae3562345f039ebcfad7525ed1\": rpc error: code = NotFound desc = could not find container \"71c41a02de4c05f9485033e5b8ec69b2fb4838ae3562345f039ebcfad7525ed1\": container with ID starting with 71c41a02de4c05f9485033e5b8ec69b2fb4838ae3562345f039ebcfad7525ed1 not found: ID does not exist"
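The ContainerStatus NotFound errors immediately after each RemoveContainer are benign: a second cleanup path re-queries a container that has already been deleted, and a not-found answer means the removal goal is already met. A sketch of that idempotent-delete convention using a sentinel error (all names here are hypothetical, not the kubelet's):

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("container not found")

// fakeRuntime stands in for the CRI runtime service queried in the log.
type fakeRuntime struct{ containers map[string]bool }

func (r *fakeRuntime) remove(id string) error {
	if !r.containers[id] {
		return fmt.Errorf("could not find container %q: %w", id, errNotFound)
	}
	delete(r.containers, id)
	return nil
}

// removeContainer treats "already gone" as success, which is why the
// "DeleteContainer returned error" entries above are logged and then ignored.
func removeContainer(r *fakeRuntime, id string) error {
	if err := r.remove(id); err != nil {
		if errors.Is(err, errNotFound) {
			return nil // desired state (no container) already holds
		}
		return err
	}
	return nil
}

func main() {
	r := &fakeRuntime{containers: map[string]bool{"e67120d98aba": true}}
	fmt.Println(removeContainer(r, "e67120d98aba")) // <nil>
	fmt.Println(removeContainer(r, "e67120d98aba")) // <nil> again: NotFound swallowed
}
```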
Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.885734 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8t59k\" (UniqueName: \"kubernetes.io/projected/a83e7e13-cb6b-4360-a57e-5fcf24f14286-kube-api-access-8t59k\") pod \"a83e7e13-cb6b-4360-a57e-5fcf24f14286\" (UID: \"a83e7e13-cb6b-4360-a57e-5fcf24f14286\") "
Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.885843 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-serving-cert\") pod \"b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73\" (UID: \"b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73\") "
Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.885927 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a83e7e13-cb6b-4360-a57e-5fcf24f14286-client-ca\") pod \"a83e7e13-cb6b-4360-a57e-5fcf24f14286\" (UID: \"a83e7e13-cb6b-4360-a57e-5fcf24f14286\") "
Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.886027 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsg8z\" (UniqueName: \"kubernetes.io/projected/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-kube-api-access-wsg8z\") pod \"b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73\" (UID: \"b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73\") "
Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.886095 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a83e7e13-cb6b-4360-a57e-5fcf24f14286-serving-cert\") pod \"a83e7e13-cb6b-4360-a57e-5fcf24f14286\" (UID: \"a83e7e13-cb6b-4360-a57e-5fcf24f14286\") "
Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.886178 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-config\") pod \"b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73\" (UID: \"b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73\") "
Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.886264 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a83e7e13-cb6b-4360-a57e-5fcf24f14286-config\") pod \"a83e7e13-cb6b-4360-a57e-5fcf24f14286\" (UID: \"a83e7e13-cb6b-4360-a57e-5fcf24f14286\") "
Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.886341 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-proxy-ca-bundles\") pod \"b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73\" (UID: \"b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73\") "
Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.886471 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-client-ca\") pod \"b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73\" (UID: \"b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73\") "
Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.886677 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/601419f0-d269-44c0-9a74-17163fc0425b-config\") pod \"controller-manager-648659b994-87x24\" (UID: \"601419f0-d269-44c0-9a74-17163fc0425b\") " pod="openshift-controller-manager/controller-manager-648659b994-87x24"
Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.886763 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/601419f0-d269-44c0-9a74-17163fc0425b-proxy-ca-bundles\") pod \"controller-manager-648659b994-87x24\" (UID: \"601419f0-d269-44c0-9a74-17163fc0425b\") " pod="openshift-controller-manager/controller-manager-648659b994-87x24"
Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.886844 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5kd6\" (UniqueName: \"kubernetes.io/projected/601419f0-d269-44c0-9a74-17163fc0425b-kube-api-access-t5kd6\") pod \"controller-manager-648659b994-87x24\" (UID: \"601419f0-d269-44c0-9a74-17163fc0425b\") " pod="openshift-controller-manager/controller-manager-648659b994-87x24"
Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.886935 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/601419f0-d269-44c0-9a74-17163fc0425b-client-ca\") pod \"controller-manager-648659b994-87x24\" (UID: \"601419f0-d269-44c0-9a74-17163fc0425b\") " pod="openshift-controller-manager/controller-manager-648659b994-87x24"
Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.887020 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/601419f0-d269-44c0-9a74-17163fc0425b-serving-cert\") pod \"controller-manager-648659b994-87x24\" (UID: \"601419f0-d269-44c0-9a74-17163fc0425b\") " pod="openshift-controller-manager/controller-manager-648659b994-87x24"
Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.887108 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73" (UID: "b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.887643 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-config" (OuterVolumeSpecName: "config") pod "b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73" (UID: "b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.887975 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-client-ca" (OuterVolumeSpecName: "client-ca") pod "b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73" (UID: "b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.888165 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a83e7e13-cb6b-4360-a57e-5fcf24f14286-client-ca" (OuterVolumeSpecName: "client-ca") pod "a83e7e13-cb6b-4360-a57e-5fcf24f14286" (UID: "a83e7e13-cb6b-4360-a57e-5fcf24f14286"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.888292 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a83e7e13-cb6b-4360-a57e-5fcf24f14286-config" (OuterVolumeSpecName: "config") pod "a83e7e13-cb6b-4360-a57e-5fcf24f14286" (UID: "a83e7e13-cb6b-4360-a57e-5fcf24f14286"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.894037 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a83e7e13-cb6b-4360-a57e-5fcf24f14286-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a83e7e13-cb6b-4360-a57e-5fcf24f14286" (UID: "a83e7e13-cb6b-4360-a57e-5fcf24f14286"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.894894 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a83e7e13-cb6b-4360-a57e-5fcf24f14286-kube-api-access-8t59k" (OuterVolumeSpecName: "kube-api-access-8t59k") pod "a83e7e13-cb6b-4360-a57e-5fcf24f14286" (UID: "a83e7e13-cb6b-4360-a57e-5fcf24f14286"). InnerVolumeSpecName "kube-api-access-8t59k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.895246 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73" (UID: "b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.901603 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-kube-api-access-wsg8z" (OuterVolumeSpecName: "kube-api-access-wsg8z") pod "b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73" (UID: "b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73"). InnerVolumeSpecName "kube-api-access-wsg8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.988609 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/601419f0-d269-44c0-9a74-17163fc0425b-config\") pod \"controller-manager-648659b994-87x24\" (UID: \"601419f0-d269-44c0-9a74-17163fc0425b\") " pod="openshift-controller-manager/controller-manager-648659b994-87x24" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.988690 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/601419f0-d269-44c0-9a74-17163fc0425b-proxy-ca-bundles\") pod \"controller-manager-648659b994-87x24\" (UID: \"601419f0-d269-44c0-9a74-17163fc0425b\") " pod="openshift-controller-manager/controller-manager-648659b994-87x24" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.988729 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5kd6\" (UniqueName: \"kubernetes.io/projected/601419f0-d269-44c0-9a74-17163fc0425b-kube-api-access-t5kd6\") pod \"controller-manager-648659b994-87x24\" (UID: \"601419f0-d269-44c0-9a74-17163fc0425b\") " pod="openshift-controller-manager/controller-manager-648659b994-87x24" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.988752 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/601419f0-d269-44c0-9a74-17163fc0425b-client-ca\") pod \"controller-manager-648659b994-87x24\" (UID: \"601419f0-d269-44c0-9a74-17163fc0425b\") " pod="openshift-controller-manager/controller-manager-648659b994-87x24" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.988783 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/601419f0-d269-44c0-9a74-17163fc0425b-serving-cert\") pod \"controller-manager-648659b994-87x24\" (UID: \"601419f0-d269-44c0-9a74-17163fc0425b\") " pod="openshift-controller-manager/controller-manager-648659b994-87x24" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 
11:29:00.988853 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a83e7e13-cb6b-4360-a57e-5fcf24f14286-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.988873 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsg8z\" (UniqueName: \"kubernetes.io/projected/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-kube-api-access-wsg8z\") on node \"crc\" DevicePath \"\"" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.988883 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.988893 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a83e7e13-cb6b-4360-a57e-5fcf24f14286-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.988903 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.988943 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.988958 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8t59k\" (UniqueName: \"kubernetes.io/projected/a83e7e13-cb6b-4360-a57e-5fcf24f14286-kube-api-access-8t59k\") on node \"crc\" DevicePath \"\"" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.988967 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.988978 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a83e7e13-cb6b-4360-a57e-5fcf24f14286-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.991014 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/601419f0-d269-44c0-9a74-17163fc0425b-client-ca\") pod \"controller-manager-648659b994-87x24\" (UID: \"601419f0-d269-44c0-9a74-17163fc0425b\") " pod="openshift-controller-manager/controller-manager-648659b994-87x24" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.991120 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/601419f0-d269-44c0-9a74-17163fc0425b-proxy-ca-bundles\") pod \"controller-manager-648659b994-87x24\" (UID: \"601419f0-d269-44c0-9a74-17163fc0425b\") " pod="openshift-controller-manager/controller-manager-648659b994-87x24" Jan 29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.992312 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/601419f0-d269-44c0-9a74-17163fc0425b-config\") pod \"controller-manager-648659b994-87x24\" (UID: \"601419f0-d269-44c0-9a74-17163fc0425b\") " pod="openshift-controller-manager/controller-manager-648659b994-87x24" Jan 
29 11:29:00 crc kubenswrapper[4766]: I0129 11:29:00.993371 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/601419f0-d269-44c0-9a74-17163fc0425b-serving-cert\") pod \"controller-manager-648659b994-87x24\" (UID: \"601419f0-d269-44c0-9a74-17163fc0425b\") " pod="openshift-controller-manager/controller-manager-648659b994-87x24" Jan 29 11:29:01 crc kubenswrapper[4766]: I0129 11:29:01.014914 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5kd6\" (UniqueName: \"kubernetes.io/projected/601419f0-d269-44c0-9a74-17163fc0425b-kube-api-access-t5kd6\") pod \"controller-manager-648659b994-87x24\" (UID: \"601419f0-d269-44c0-9a74-17163fc0425b\") " pod="openshift-controller-manager/controller-manager-648659b994-87x24" Jan 29 11:29:01 crc kubenswrapper[4766]: I0129 11:29:01.083028 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-648659b994-87x24" Jan 29 11:29:01 crc kubenswrapper[4766]: I0129 11:29:01.169715 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn"] Jan 29 11:29:01 crc kubenswrapper[4766]: I0129 11:29:01.173810 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58599c79bf-pkqzn"] Jan 29 11:29:01 crc kubenswrapper[4766]: I0129 11:29:01.181629 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz"] Jan 29 11:29:01 crc kubenswrapper[4766]: I0129 11:29:01.188903 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-54ffb9c8c5-mn9gz"] Jan 29 11:29:01 crc kubenswrapper[4766]: I0129 11:29:01.242223 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a83e7e13-cb6b-4360-a57e-5fcf24f14286" path="/var/lib/kubelet/pods/a83e7e13-cb6b-4360-a57e-5fcf24f14286/volumes" Jan 29 11:29:01 crc kubenswrapper[4766]: I0129 11:29:01.242992 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73" path="/var/lib/kubelet/pods/b2fbe4ba-2517-4e3d-bdab-612a8b4c4b73/volumes" Jan 29 11:29:01 crc kubenswrapper[4766]: I0129 11:29:01.563173 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-648659b994-87x24"] Jan 29 11:29:01 crc kubenswrapper[4766]: I0129 11:29:01.854886 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-648659b994-87x24" event={"ID":"601419f0-d269-44c0-9a74-17163fc0425b","Type":"ContainerStarted","Data":"5ffbd0384e4abb24f257117eb7f22aa41e98a6d44133185bd7c0c0c7a0044b49"} Jan 29 11:29:02 crc kubenswrapper[4766]: I0129 11:29:02.787671 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cddb56ddd-pfcbq"] Jan 29 11:29:02 crc kubenswrapper[4766]: I0129 11:29:02.788886 4766 util.go:30] "No sandbox for pod can be found. 
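The SyncLoop entries above name their event source explicitly: ADD/UPDATE/DELETE/REMOVE arrive from source="api" (watch events on pod objects), while the (PLEG) events come from the runtime. DELETE starts a graceful kill; REMOVE fires once the object is gone from the API, after which per-pod directories can be garbage-collected ("Cleaned up orphaned pod volumes dir"). A schematic dispatcher, illustrative only and not the kubelet's actual switch:

```go
package main

import "fmt"

type podEvent struct {
	op   string // ADD, UPDATE, DELETE, REMOVE
	pods []string
}

// dispatch mirrors the SyncLoop cases in the log: DELETE begins a graceful
// kill, REMOVE drops the pod worker after the API object is gone, and only
// then are orphaned per-pod directories cleaned up.
func dispatch(ev podEvent) {
	switch ev.op {
	case "ADD", "UPDATE":
		fmt.Printf("SyncLoop %s source=api pods=%v\n", ev.op, ev.pods)
	case "DELETE":
		fmt.Printf("SyncLoop DELETE: killing containers for %v with grace period\n", ev.pods)
	case "REMOVE":
		fmt.Printf("SyncLoop REMOVE: cleaning up orphaned volume dirs for %v\n", ev.pods)
	default:
		fmt.Printf("unhandled op %q\n", ev.op)
	}
}

func main() {
	for _, ev := range []podEvent{
		{"DELETE", []string{"route-controller-manager-58599c79bf-pkqzn"}},
		{"REMOVE", []string{"route-controller-manager-58599c79bf-pkqzn"}},
		{"ADD", []string{"route-controller-manager-6cddb56ddd-pfcbq"}},
	} {
		dispatch(ev)
	}
}
```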
Jan 29 11:29:02 crc kubenswrapper[4766]: I0129 11:29:02.792966 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 29 11:29:02 crc kubenswrapper[4766]: I0129 11:29:02.793180 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 29 11:29:02 crc kubenswrapper[4766]: I0129 11:29:02.793303 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 29 11:29:02 crc kubenswrapper[4766]: I0129 11:29:02.793546 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 29 11:29:02 crc kubenswrapper[4766]: I0129 11:29:02.793694 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 29 11:29:02 crc kubenswrapper[4766]: I0129 11:29:02.793888 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 29 11:29:02 crc kubenswrapper[4766]: I0129 11:29:02.807977 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cddb56ddd-pfcbq"]
Jan 29 11:29:02 crc kubenswrapper[4766]: I0129 11:29:02.865061 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-648659b994-87x24" event={"ID":"601419f0-d269-44c0-9a74-17163fc0425b","Type":"ContainerStarted","Data":"7189c0df1f72fdecb0676f43cf505477ba8cab1b9f3ff0f8e3f63f52e02e79f6"}
Jan 29 11:29:02 crc kubenswrapper[4766]: I0129 11:29:02.865279 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-648659b994-87x24"
Jan 29 11:29:02 crc kubenswrapper[4766]: I0129 11:29:02.870039 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-648659b994-87x24"
Jan 29 11:29:02 crc kubenswrapper[4766]: I0129 11:29:02.909608 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-648659b994-87x24" podStartSLOduration=3.909577203 podStartE2EDuration="3.909577203s" podCreationTimestamp="2026-01-29 11:28:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:29:02.8854761 +0000 UTC m=+479.997869121" watchObservedRunningTime="2026-01-29 11:29:02.909577203 +0000 UTC m=+480.021970214"
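In the startup-latency entry above, podStartSLOduration matches watchObservedRunningTime minus podCreationTimestamp, and the zero-valued firstStartedPulling/lastFinishedPulling fields indicate no image-pull time needed to be excluded (the images were already on the node). The arithmetic can be checked directly from the logged timestamps, which use Go's default time formatting:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied verbatim from the pod_startup_latency_tracker entry.
	// time.Parse accepts an optional fractional second after the seconds field.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, err := time.Parse(layout, "2026-01-29 11:28:59 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2026-01-29 11:29:02.909577203 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 3.909577203s, matching podStartSLOduration=3.909577203.
	fmt.Println(observed.Sub(created))
}
```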
\"route-controller-manager-6cddb56ddd-pfcbq\" (UID: \"291db876-7914-43f0-bb3f-c3ce3d78801c\") " pod="openshift-route-controller-manager/route-controller-manager-6cddb56ddd-pfcbq" Jan 29 11:29:02 crc kubenswrapper[4766]: I0129 11:29:02.912218 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/291db876-7914-43f0-bb3f-c3ce3d78801c-client-ca\") pod \"route-controller-manager-6cddb56ddd-pfcbq\" (UID: \"291db876-7914-43f0-bb3f-c3ce3d78801c\") " pod="openshift-route-controller-manager/route-controller-manager-6cddb56ddd-pfcbq" Jan 29 11:29:02 crc kubenswrapper[4766]: I0129 11:29:02.912257 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/291db876-7914-43f0-bb3f-c3ce3d78801c-config\") pod \"route-controller-manager-6cddb56ddd-pfcbq\" (UID: \"291db876-7914-43f0-bb3f-c3ce3d78801c\") " pod="openshift-route-controller-manager/route-controller-manager-6cddb56ddd-pfcbq" Jan 29 11:29:03 crc kubenswrapper[4766]: I0129 11:29:03.013926 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/291db876-7914-43f0-bb3f-c3ce3d78801c-serving-cert\") pod \"route-controller-manager-6cddb56ddd-pfcbq\" (UID: \"291db876-7914-43f0-bb3f-c3ce3d78801c\") " pod="openshift-route-controller-manager/route-controller-manager-6cddb56ddd-pfcbq" Jan 29 11:29:03 crc kubenswrapper[4766]: I0129 11:29:03.014042 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/291db876-7914-43f0-bb3f-c3ce3d78801c-client-ca\") pod \"route-controller-manager-6cddb56ddd-pfcbq\" (UID: \"291db876-7914-43f0-bb3f-c3ce3d78801c\") " pod="openshift-route-controller-manager/route-controller-manager-6cddb56ddd-pfcbq" Jan 29 11:29:03 crc kubenswrapper[4766]: I0129 11:29:03.014077 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/291db876-7914-43f0-bb3f-c3ce3d78801c-config\") pod \"route-controller-manager-6cddb56ddd-pfcbq\" (UID: \"291db876-7914-43f0-bb3f-c3ce3d78801c\") " pod="openshift-route-controller-manager/route-controller-manager-6cddb56ddd-pfcbq" Jan 29 11:29:03 crc kubenswrapper[4766]: I0129 11:29:03.014117 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bcv2\" (UniqueName: \"kubernetes.io/projected/291db876-7914-43f0-bb3f-c3ce3d78801c-kube-api-access-7bcv2\") pod \"route-controller-manager-6cddb56ddd-pfcbq\" (UID: \"291db876-7914-43f0-bb3f-c3ce3d78801c\") " pod="openshift-route-controller-manager/route-controller-manager-6cddb56ddd-pfcbq" Jan 29 11:29:03 crc kubenswrapper[4766]: I0129 11:29:03.015821 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/291db876-7914-43f0-bb3f-c3ce3d78801c-config\") pod \"route-controller-manager-6cddb56ddd-pfcbq\" (UID: \"291db876-7914-43f0-bb3f-c3ce3d78801c\") " pod="openshift-route-controller-manager/route-controller-manager-6cddb56ddd-pfcbq" Jan 29 11:29:03 crc kubenswrapper[4766]: I0129 11:29:03.016097 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/291db876-7914-43f0-bb3f-c3ce3d78801c-client-ca\") pod \"route-controller-manager-6cddb56ddd-pfcbq\" (UID: 
\"291db876-7914-43f0-bb3f-c3ce3d78801c\") " pod="openshift-route-controller-manager/route-controller-manager-6cddb56ddd-pfcbq" Jan 29 11:29:03 crc kubenswrapper[4766]: I0129 11:29:03.024684 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/291db876-7914-43f0-bb3f-c3ce3d78801c-serving-cert\") pod \"route-controller-manager-6cddb56ddd-pfcbq\" (UID: \"291db876-7914-43f0-bb3f-c3ce3d78801c\") " pod="openshift-route-controller-manager/route-controller-manager-6cddb56ddd-pfcbq" Jan 29 11:29:03 crc kubenswrapper[4766]: I0129 11:29:03.035081 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bcv2\" (UniqueName: \"kubernetes.io/projected/291db876-7914-43f0-bb3f-c3ce3d78801c-kube-api-access-7bcv2\") pod \"route-controller-manager-6cddb56ddd-pfcbq\" (UID: \"291db876-7914-43f0-bb3f-c3ce3d78801c\") " pod="openshift-route-controller-manager/route-controller-manager-6cddb56ddd-pfcbq" Jan 29 11:29:03 crc kubenswrapper[4766]: I0129 11:29:03.116798 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cddb56ddd-pfcbq" Jan 29 11:29:03 crc kubenswrapper[4766]: I0129 11:29:03.516353 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cddb56ddd-pfcbq"] Jan 29 11:29:03 crc kubenswrapper[4766]: W0129 11:29:03.523756 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod291db876_7914_43f0_bb3f_c3ce3d78801c.slice/crio-d4e9dff4084918f14c3010206c5ce8d550c225a5e1341603cdab3f79b78f5a5d WatchSource:0}: Error finding container d4e9dff4084918f14c3010206c5ce8d550c225a5e1341603cdab3f79b78f5a5d: Status 404 returned error can't find the container with id d4e9dff4084918f14c3010206c5ce8d550c225a5e1341603cdab3f79b78f5a5d Jan 29 11:29:03 crc kubenswrapper[4766]: I0129 11:29:03.874922 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cddb56ddd-pfcbq" event={"ID":"291db876-7914-43f0-bb3f-c3ce3d78801c","Type":"ContainerStarted","Data":"01cb54645d7985736116cb778ec180aa25b13bc4caa5088bb88029b3595a98bf"} Jan 29 11:29:03 crc kubenswrapper[4766]: I0129 11:29:03.877323 4766 patch_prober.go:28] interesting pod/route-controller-manager-6cddb56ddd-pfcbq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" start-of-body= Jan 29 11:29:03 crc kubenswrapper[4766]: I0129 11:29:03.877365 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6cddb56ddd-pfcbq" podUID="291db876-7914-43f0-bb3f-c3ce3d78801c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" Jan 29 11:29:03 crc kubenswrapper[4766]: I0129 11:29:03.877702 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6cddb56ddd-pfcbq" Jan 29 11:29:03 crc kubenswrapper[4766]: I0129 11:29:03.877974 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cddb56ddd-pfcbq" 
event={"ID":"291db876-7914-43f0-bb3f-c3ce3d78801c","Type":"ContainerStarted","Data":"d4e9dff4084918f14c3010206c5ce8d550c225a5e1341603cdab3f79b78f5a5d"} Jan 29 11:29:03 crc kubenswrapper[4766]: I0129 11:29:03.895357 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6cddb56ddd-pfcbq" podStartSLOduration=4.895338672 podStartE2EDuration="4.895338672s" podCreationTimestamp="2026-01-29 11:28:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:29:03.893108117 +0000 UTC m=+481.005501138" watchObservedRunningTime="2026-01-29 11:29:03.895338672 +0000 UTC m=+481.007731703" Jan 29 11:29:04 crc kubenswrapper[4766]: I0129 11:29:04.888666 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6cddb56ddd-pfcbq" Jan 29 11:29:16 crc kubenswrapper[4766]: I0129 11:29:16.362483 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:29:16 crc kubenswrapper[4766]: I0129 11:29:16.363149 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:29:16 crc kubenswrapper[4766]: I0129 11:29:16.363203 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" Jan 29 11:29:16 crc kubenswrapper[4766]: I0129 11:29:16.363869 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fad51bc095d53b0b4e38951d803ca7e9fd8430c262fc7df79bdb27e585373f6f"} pod="openshift-machine-config-operator/machine-config-daemon-npgg8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:29:16 crc kubenswrapper[4766]: I0129 11:29:16.363940 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" containerID="cri-o://fad51bc095d53b0b4e38951d803ca7e9fd8430c262fc7df79bdb27e585373f6f" gracePeriod=600 Jan 29 11:29:16 crc kubenswrapper[4766]: I0129 11:29:16.946221 4766 generic.go:334] "Generic (PLEG): container finished" podID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerID="fad51bc095d53b0b4e38951d803ca7e9fd8430c262fc7df79bdb27e585373f6f" exitCode=0 Jan 29 11:29:16 crc kubenswrapper[4766]: I0129 11:29:16.946447 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" event={"ID":"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a","Type":"ContainerDied","Data":"fad51bc095d53b0b4e38951d803ca7e9fd8430c262fc7df79bdb27e585373f6f"} Jan 29 11:29:16 crc kubenswrapper[4766]: I0129 11:29:16.946566 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" 
event={"ID":"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a","Type":"ContainerStarted","Data":"289b46d81663eab98ebc9c1c1ff871931cb149c2d0ce77c14017931a9f7bb210"} Jan 29 11:29:16 crc kubenswrapper[4766]: I0129 11:29:16.946587 4766 scope.go:117] "RemoveContainer" containerID="9febd4264914d9c116a6140e5830ebf08ab5d05c7d1121fd9da14550c928c576" Jan 29 11:29:19 crc kubenswrapper[4766]: I0129 11:29:19.599887 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-648659b994-87x24"] Jan 29 11:29:19 crc kubenswrapper[4766]: I0129 11:29:19.600840 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-648659b994-87x24" podUID="601419f0-d269-44c0-9a74-17163fc0425b" containerName="controller-manager" containerID="cri-o://7189c0df1f72fdecb0676f43cf505477ba8cab1b9f3ff0f8e3f63f52e02e79f6" gracePeriod=30 Jan 29 11:29:19 crc kubenswrapper[4766]: I0129 11:29:19.970311 4766 generic.go:334] "Generic (PLEG): container finished" podID="601419f0-d269-44c0-9a74-17163fc0425b" containerID="7189c0df1f72fdecb0676f43cf505477ba8cab1b9f3ff0f8e3f63f52e02e79f6" exitCode=0 Jan 29 11:29:19 crc kubenswrapper[4766]: I0129 11:29:19.970376 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-648659b994-87x24" event={"ID":"601419f0-d269-44c0-9a74-17163fc0425b","Type":"ContainerDied","Data":"7189c0df1f72fdecb0676f43cf505477ba8cab1b9f3ff0f8e3f63f52e02e79f6"} Jan 29 11:29:20 crc kubenswrapper[4766]: I0129 11:29:20.199464 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-648659b994-87x24" Jan 29 11:29:20 crc kubenswrapper[4766]: I0129 11:29:20.350700 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/601419f0-d269-44c0-9a74-17163fc0425b-proxy-ca-bundles\") pod \"601419f0-d269-44c0-9a74-17163fc0425b\" (UID: \"601419f0-d269-44c0-9a74-17163fc0425b\") " Jan 29 11:29:20 crc kubenswrapper[4766]: I0129 11:29:20.350827 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/601419f0-d269-44c0-9a74-17163fc0425b-serving-cert\") pod \"601419f0-d269-44c0-9a74-17163fc0425b\" (UID: \"601419f0-d269-44c0-9a74-17163fc0425b\") " Jan 29 11:29:20 crc kubenswrapper[4766]: I0129 11:29:20.350891 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/601419f0-d269-44c0-9a74-17163fc0425b-config\") pod \"601419f0-d269-44c0-9a74-17163fc0425b\" (UID: \"601419f0-d269-44c0-9a74-17163fc0425b\") " Jan 29 11:29:20 crc kubenswrapper[4766]: I0129 11:29:20.350924 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/601419f0-d269-44c0-9a74-17163fc0425b-client-ca\") pod \"601419f0-d269-44c0-9a74-17163fc0425b\" (UID: \"601419f0-d269-44c0-9a74-17163fc0425b\") " Jan 29 11:29:20 crc kubenswrapper[4766]: I0129 11:29:20.350982 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5kd6\" (UniqueName: \"kubernetes.io/projected/601419f0-d269-44c0-9a74-17163fc0425b-kube-api-access-t5kd6\") pod \"601419f0-d269-44c0-9a74-17163fc0425b\" (UID: \"601419f0-d269-44c0-9a74-17163fc0425b\") " Jan 29 11:29:20 crc kubenswrapper[4766]: I0129 11:29:20.351912 4766 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/601419f0-d269-44c0-9a74-17163fc0425b-client-ca" (OuterVolumeSpecName: "client-ca") pod "601419f0-d269-44c0-9a74-17163fc0425b" (UID: "601419f0-d269-44c0-9a74-17163fc0425b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:29:20 crc kubenswrapper[4766]: I0129 11:29:20.352022 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/601419f0-d269-44c0-9a74-17163fc0425b-config" (OuterVolumeSpecName: "config") pod "601419f0-d269-44c0-9a74-17163fc0425b" (UID: "601419f0-d269-44c0-9a74-17163fc0425b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:29:20 crc kubenswrapper[4766]: I0129 11:29:20.352043 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/601419f0-d269-44c0-9a74-17163fc0425b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "601419f0-d269-44c0-9a74-17163fc0425b" (UID: "601419f0-d269-44c0-9a74-17163fc0425b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:29:20 crc kubenswrapper[4766]: I0129 11:29:20.357833 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/601419f0-d269-44c0-9a74-17163fc0425b-kube-api-access-t5kd6" (OuterVolumeSpecName: "kube-api-access-t5kd6") pod "601419f0-d269-44c0-9a74-17163fc0425b" (UID: "601419f0-d269-44c0-9a74-17163fc0425b"). InnerVolumeSpecName "kube-api-access-t5kd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:29:20 crc kubenswrapper[4766]: I0129 11:29:20.358120 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/601419f0-d269-44c0-9a74-17163fc0425b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "601419f0-d269-44c0-9a74-17163fc0425b" (UID: "601419f0-d269-44c0-9a74-17163fc0425b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:29:20 crc kubenswrapper[4766]: I0129 11:29:20.453578 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5kd6\" (UniqueName: \"kubernetes.io/projected/601419f0-d269-44c0-9a74-17163fc0425b-kube-api-access-t5kd6\") on node \"crc\" DevicePath \"\"" Jan 29 11:29:20 crc kubenswrapper[4766]: I0129 11:29:20.453663 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/601419f0-d269-44c0-9a74-17163fc0425b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 11:29:20 crc kubenswrapper[4766]: I0129 11:29:20.453739 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/601419f0-d269-44c0-9a74-17163fc0425b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:29:20 crc kubenswrapper[4766]: I0129 11:29:20.453754 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/601419f0-d269-44c0-9a74-17163fc0425b-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:29:20 crc kubenswrapper[4766]: I0129 11:29:20.453955 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/601419f0-d269-44c0-9a74-17163fc0425b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:29:20 crc kubenswrapper[4766]: I0129 11:29:20.811695 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb"] Jan 29 11:29:20 crc kubenswrapper[4766]: E0129 11:29:20.811903 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="601419f0-d269-44c0-9a74-17163fc0425b" containerName="controller-manager" Jan 29 11:29:20 crc kubenswrapper[4766]: I0129 11:29:20.811915 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="601419f0-d269-44c0-9a74-17163fc0425b" containerName="controller-manager" Jan 29 11:29:20 crc kubenswrapper[4766]: I0129 11:29:20.812016 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="601419f0-d269-44c0-9a74-17163fc0425b" containerName="controller-manager" Jan 29 11:29:20 crc kubenswrapper[4766]: I0129 11:29:20.812368 4766 util.go:30] "No sandbox for pod can be found. 
Jan 29 11:29:20 crc kubenswrapper[4766]: I0129 11:29:20.819830 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb"]
Jan 29 11:29:20 crc kubenswrapper[4766]: I0129 11:29:20.959932 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-proxy-ca-bundles\") pod \"controller-manager-54ffb9c8c5-mk6kb\" (UID: \"7af7cf81-e9b2-40d1-bd01-af48a4fa1242\") " pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb"
Jan 29 11:29:20 crc kubenswrapper[4766]: I0129 11:29:20.960020 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-serving-cert\") pod \"controller-manager-54ffb9c8c5-mk6kb\" (UID: \"7af7cf81-e9b2-40d1-bd01-af48a4fa1242\") " pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb"
Jan 29 11:29:20 crc kubenswrapper[4766]: I0129 11:29:20.960044 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jssbg\" (UniqueName: \"kubernetes.io/projected/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-kube-api-access-jssbg\") pod \"controller-manager-54ffb9c8c5-mk6kb\" (UID: \"7af7cf81-e9b2-40d1-bd01-af48a4fa1242\") " pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb"
Jan 29 11:29:20 crc kubenswrapper[4766]: I0129 11:29:20.960067 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-config\") pod \"controller-manager-54ffb9c8c5-mk6kb\" (UID: \"7af7cf81-e9b2-40d1-bd01-af48a4fa1242\") " pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb"
Jan 29 11:29:20 crc kubenswrapper[4766]: I0129 11:29:20.960149 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-client-ca\") pod \"controller-manager-54ffb9c8c5-mk6kb\" (UID: \"7af7cf81-e9b2-40d1-bd01-af48a4fa1242\") " pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb"
Jan 29 11:29:20 crc kubenswrapper[4766]: I0129 11:29:20.977089 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-648659b994-87x24" event={"ID":"601419f0-d269-44c0-9a74-17163fc0425b","Type":"ContainerDied","Data":"5ffbd0384e4abb24f257117eb7f22aa41e98a6d44133185bd7c0c0c7a0044b49"}
Jan 29 11:29:20 crc kubenswrapper[4766]: I0129 11:29:20.977144 4766 scope.go:117] "RemoveContainer" containerID="7189c0df1f72fdecb0676f43cf505477ba8cab1b9f3ff0f8e3f63f52e02e79f6"
Jan 29 11:29:20 crc kubenswrapper[4766]: I0129 11:29:20.977245 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-648659b994-87x24"
Jan 29 11:29:21 crc kubenswrapper[4766]: I0129 11:29:21.007071 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-648659b994-87x24"]
Jan 29 11:29:21 crc kubenswrapper[4766]: I0129 11:29:21.010935 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-648659b994-87x24"]
Jan 29 11:29:21 crc kubenswrapper[4766]: I0129 11:29:21.061225 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-serving-cert\") pod \"controller-manager-54ffb9c8c5-mk6kb\" (UID: \"7af7cf81-e9b2-40d1-bd01-af48a4fa1242\") " pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb"
Jan 29 11:29:21 crc kubenswrapper[4766]: I0129 11:29:21.061263 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jssbg\" (UniqueName: \"kubernetes.io/projected/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-kube-api-access-jssbg\") pod \"controller-manager-54ffb9c8c5-mk6kb\" (UID: \"7af7cf81-e9b2-40d1-bd01-af48a4fa1242\") " pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb"
Jan 29 11:29:21 crc kubenswrapper[4766]: I0129 11:29:21.061284 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-config\") pod \"controller-manager-54ffb9c8c5-mk6kb\" (UID: \"7af7cf81-e9b2-40d1-bd01-af48a4fa1242\") " pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb"
Jan 29 11:29:21 crc kubenswrapper[4766]: I0129 11:29:21.061318 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-client-ca\") pod \"controller-manager-54ffb9c8c5-mk6kb\" (UID: \"7af7cf81-e9b2-40d1-bd01-af48a4fa1242\") " pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb"
Jan 29 11:29:21 crc kubenswrapper[4766]: I0129 11:29:21.061344 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-proxy-ca-bundles\") pod \"controller-manager-54ffb9c8c5-mk6kb\" (UID: \"7af7cf81-e9b2-40d1-bd01-af48a4fa1242\") " pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb"
Jan 29 11:29:21 crc kubenswrapper[4766]: I0129 11:29:21.062430 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-proxy-ca-bundles\") pod \"controller-manager-54ffb9c8c5-mk6kb\" (UID: \"7af7cf81-e9b2-40d1-bd01-af48a4fa1242\") " pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb"
Jan 29 11:29:21 crc kubenswrapper[4766]: I0129 11:29:21.063052 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-client-ca\") pod \"controller-manager-54ffb9c8c5-mk6kb\" (UID: \"7af7cf81-e9b2-40d1-bd01-af48a4fa1242\") " pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb"
Jan 29 11:29:21 crc kubenswrapper[4766]: I0129 11:29:21.063200 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-config\") pod \"controller-manager-54ffb9c8c5-mk6kb\" (UID: \"7af7cf81-e9b2-40d1-bd01-af48a4fa1242\") " pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb"
Jan 29 11:29:21 crc kubenswrapper[4766]: I0129 11:29:21.066076 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-serving-cert\") pod \"controller-manager-54ffb9c8c5-mk6kb\" (UID: \"7af7cf81-e9b2-40d1-bd01-af48a4fa1242\") " pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb"
Jan 29 11:29:21 crc kubenswrapper[4766]: I0129 11:29:21.081339 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jssbg\" (UniqueName: \"kubernetes.io/projected/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-kube-api-access-jssbg\") pod \"controller-manager-54ffb9c8c5-mk6kb\" (UID: \"7af7cf81-e9b2-40d1-bd01-af48a4fa1242\") " pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb"
Jan 29 11:29:21 crc kubenswrapper[4766]: I0129 11:29:21.129773 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb"
Jan 29 11:29:21 crc kubenswrapper[4766]: I0129 11:29:21.236298 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="601419f0-d269-44c0-9a74-17163fc0425b" path="/var/lib/kubelet/pods/601419f0-d269-44c0-9a74-17163fc0425b/volumes"
Jan 29 11:29:21 crc kubenswrapper[4766]: I0129 11:29:21.518953 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb"]
Jan 29 11:29:21 crc kubenswrapper[4766]: I0129 11:29:21.984147 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb" event={"ID":"7af7cf81-e9b2-40d1-bd01-af48a4fa1242","Type":"ContainerStarted","Data":"3a48b4522366cfbcf50c815d457ea6a0780045c03e7fd090d00f3a92b9f7ff76"}
Jan 29 11:29:21 crc kubenswrapper[4766]: I0129 11:29:21.984247 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb" event={"ID":"7af7cf81-e9b2-40d1-bd01-af48a4fa1242","Type":"ContainerStarted","Data":"0f3e7502686439b3ed4402b8645ed46671475abccac055a2c2a89c54bf834d31"}
Jan 29 11:29:21 crc kubenswrapper[4766]: I0129 11:29:21.984420 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb"
Jan 29 11:29:22 crc kubenswrapper[4766]: I0129 11:29:22.008435 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb"
Jan 29 11:29:22 crc kubenswrapper[4766]: I0129 11:29:22.017678 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb" podStartSLOduration=3.017657443 podStartE2EDuration="3.017657443s" podCreationTimestamp="2026-01-29 11:29:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:29:22.007372493 +0000 UTC m=+499.119765504" watchObservedRunningTime="2026-01-29 11:29:22.017657443 +0000 UTC m=+499.130050444"
Jan 29 11:29:39 crc kubenswrapper[4766]: I0129 11:29:39.623863 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb"]
pods=["openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb"] Jan 29 11:29:39 crc kubenswrapper[4766]: I0129 11:29:39.624554 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb" podUID="7af7cf81-e9b2-40d1-bd01-af48a4fa1242" containerName="controller-manager" containerID="cri-o://3a48b4522366cfbcf50c815d457ea6a0780045c03e7fd090d00f3a92b9f7ff76" gracePeriod=30 Jan 29 11:29:39 crc kubenswrapper[4766]: I0129 11:29:39.715362 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cddb56ddd-pfcbq"] Jan 29 11:29:39 crc kubenswrapper[4766]: I0129 11:29:39.715626 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6cddb56ddd-pfcbq" podUID="291db876-7914-43f0-bb3f-c3ce3d78801c" containerName="route-controller-manager" containerID="cri-o://01cb54645d7985736116cb778ec180aa25b13bc4caa5088bb88029b3595a98bf" gracePeriod=30 Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.076400 4766 generic.go:334] "Generic (PLEG): container finished" podID="7af7cf81-e9b2-40d1-bd01-af48a4fa1242" containerID="3a48b4522366cfbcf50c815d457ea6a0780045c03e7fd090d00f3a92b9f7ff76" exitCode=0 Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.076472 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb" event={"ID":"7af7cf81-e9b2-40d1-bd01-af48a4fa1242","Type":"ContainerDied","Data":"3a48b4522366cfbcf50c815d457ea6a0780045c03e7fd090d00f3a92b9f7ff76"} Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.078302 4766 generic.go:334] "Generic (PLEG): container finished" podID="291db876-7914-43f0-bb3f-c3ce3d78801c" containerID="01cb54645d7985736116cb778ec180aa25b13bc4caa5088bb88029b3595a98bf" exitCode=0 Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.078339 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cddb56ddd-pfcbq" event={"ID":"291db876-7914-43f0-bb3f-c3ce3d78801c","Type":"ContainerDied","Data":"01cb54645d7985736116cb778ec180aa25b13bc4caa5088bb88029b3595a98bf"} Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.762858 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cddb56ddd-pfcbq" Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.794051 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx"] Jan 29 11:29:40 crc kubenswrapper[4766]: E0129 11:29:40.794307 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="291db876-7914-43f0-bb3f-c3ce3d78801c" containerName="route-controller-manager" Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.794327 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="291db876-7914-43f0-bb3f-c3ce3d78801c" containerName="route-controller-manager" Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.794457 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="291db876-7914-43f0-bb3f-c3ce3d78801c" containerName="route-controller-manager" Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.796687 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx" Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.807856 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx"] Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.829713 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb" Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.911390 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-client-ca\") pod \"7af7cf81-e9b2-40d1-bd01-af48a4fa1242\" (UID: \"7af7cf81-e9b2-40d1-bd01-af48a4fa1242\") " Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.911531 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-config\") pod \"7af7cf81-e9b2-40d1-bd01-af48a4fa1242\" (UID: \"7af7cf81-e9b2-40d1-bd01-af48a4fa1242\") " Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.911555 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jssbg\" (UniqueName: \"kubernetes.io/projected/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-kube-api-access-jssbg\") pod \"7af7cf81-e9b2-40d1-bd01-af48a4fa1242\" (UID: \"7af7cf81-e9b2-40d1-bd01-af48a4fa1242\") " Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.911600 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-serving-cert\") pod \"7af7cf81-e9b2-40d1-bd01-af48a4fa1242\" (UID: \"7af7cf81-e9b2-40d1-bd01-af48a4fa1242\") " Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.911625 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bcv2\" (UniqueName: \"kubernetes.io/projected/291db876-7914-43f0-bb3f-c3ce3d78801c-kube-api-access-7bcv2\") pod \"291db876-7914-43f0-bb3f-c3ce3d78801c\" (UID: \"291db876-7914-43f0-bb3f-c3ce3d78801c\") " Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.911682 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/291db876-7914-43f0-bb3f-c3ce3d78801c-config\") pod \"291db876-7914-43f0-bb3f-c3ce3d78801c\" (UID: \"291db876-7914-43f0-bb3f-c3ce3d78801c\") " Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.911705 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/291db876-7914-43f0-bb3f-c3ce3d78801c-client-ca\") pod \"291db876-7914-43f0-bb3f-c3ce3d78801c\" (UID: \"291db876-7914-43f0-bb3f-c3ce3d78801c\") " Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.911729 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/291db876-7914-43f0-bb3f-c3ce3d78801c-serving-cert\") pod \"291db876-7914-43f0-bb3f-c3ce3d78801c\" (UID: \"291db876-7914-43f0-bb3f-c3ce3d78801c\") " Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.911777 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-proxy-ca-bundles\") pod \"7af7cf81-e9b2-40d1-bd01-af48a4fa1242\" (UID: \"7af7cf81-e9b2-40d1-bd01-af48a4fa1242\") " Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.911952 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd7a1836-1221-4f95-a0dd-ab008ba0196b-client-ca\") pod \"route-controller-manager-8d678cf5c-k2wlx\" (UID: \"fd7a1836-1221-4f95-a0dd-ab008ba0196b\") " pod="openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx" Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.912007 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd7a1836-1221-4f95-a0dd-ab008ba0196b-config\") pod \"route-controller-manager-8d678cf5c-k2wlx\" (UID: \"fd7a1836-1221-4f95-a0dd-ab008ba0196b\") " pod="openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx" Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.912032 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd7a1836-1221-4f95-a0dd-ab008ba0196b-serving-cert\") pod \"route-controller-manager-8d678cf5c-k2wlx\" (UID: \"fd7a1836-1221-4f95-a0dd-ab008ba0196b\") " pod="openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx" Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.912091 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwjgw\" (UniqueName: \"kubernetes.io/projected/fd7a1836-1221-4f95-a0dd-ab008ba0196b-kube-api-access-rwjgw\") pod \"route-controller-manager-8d678cf5c-k2wlx\" (UID: \"fd7a1836-1221-4f95-a0dd-ab008ba0196b\") " pod="openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx" Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.912222 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-client-ca" (OuterVolumeSpecName: "client-ca") pod "7af7cf81-e9b2-40d1-bd01-af48a4fa1242" (UID: "7af7cf81-e9b2-40d1-bd01-af48a4fa1242"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.912735 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/291db876-7914-43f0-bb3f-c3ce3d78801c-config" (OuterVolumeSpecName: "config") pod "291db876-7914-43f0-bb3f-c3ce3d78801c" (UID: "291db876-7914-43f0-bb3f-c3ce3d78801c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.912761 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/291db876-7914-43f0-bb3f-c3ce3d78801c-client-ca" (OuterVolumeSpecName: "client-ca") pod "291db876-7914-43f0-bb3f-c3ce3d78801c" (UID: "291db876-7914-43f0-bb3f-c3ce3d78801c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.913081 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-config" (OuterVolumeSpecName: "config") pod "7af7cf81-e9b2-40d1-bd01-af48a4fa1242" (UID: "7af7cf81-e9b2-40d1-bd01-af48a4fa1242"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.913887 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7af7cf81-e9b2-40d1-bd01-af48a4fa1242" (UID: "7af7cf81-e9b2-40d1-bd01-af48a4fa1242"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.917490 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/291db876-7914-43f0-bb3f-c3ce3d78801c-kube-api-access-7bcv2" (OuterVolumeSpecName: "kube-api-access-7bcv2") pod "291db876-7914-43f0-bb3f-c3ce3d78801c" (UID: "291db876-7914-43f0-bb3f-c3ce3d78801c"). InnerVolumeSpecName "kube-api-access-7bcv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.918530 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291db876-7914-43f0-bb3f-c3ce3d78801c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "291db876-7914-43f0-bb3f-c3ce3d78801c" (UID: "291db876-7914-43f0-bb3f-c3ce3d78801c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.926223 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7af7cf81-e9b2-40d1-bd01-af48a4fa1242" (UID: "7af7cf81-e9b2-40d1-bd01-af48a4fa1242"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:29:40 crc kubenswrapper[4766]: I0129 11:29:40.926278 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-kube-api-access-jssbg" (OuterVolumeSpecName: "kube-api-access-jssbg") pod "7af7cf81-e9b2-40d1-bd01-af48a4fa1242" (UID: "7af7cf81-e9b2-40d1-bd01-af48a4fa1242"). InnerVolumeSpecName "kube-api-access-jssbg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.013619 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd7a1836-1221-4f95-a0dd-ab008ba0196b-client-ca\") pod \"route-controller-manager-8d678cf5c-k2wlx\" (UID: \"fd7a1836-1221-4f95-a0dd-ab008ba0196b\") " pod="openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx" Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.013688 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd7a1836-1221-4f95-a0dd-ab008ba0196b-config\") pod \"route-controller-manager-8d678cf5c-k2wlx\" (UID: \"fd7a1836-1221-4f95-a0dd-ab008ba0196b\") " pod="openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx" Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.013716 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd7a1836-1221-4f95-a0dd-ab008ba0196b-serving-cert\") pod \"route-controller-manager-8d678cf5c-k2wlx\" (UID: \"fd7a1836-1221-4f95-a0dd-ab008ba0196b\") " pod="openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx" Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.013804 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwjgw\" (UniqueName: \"kubernetes.io/projected/fd7a1836-1221-4f95-a0dd-ab008ba0196b-kube-api-access-rwjgw\") pod \"route-controller-manager-8d678cf5c-k2wlx\" (UID: \"fd7a1836-1221-4f95-a0dd-ab008ba0196b\") " pod="openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx" Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.013852 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.013864 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jssbg\" (UniqueName: \"kubernetes.io/projected/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-kube-api-access-jssbg\") on node \"crc\" DevicePath \"\"" Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.013874 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.013884 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bcv2\" (UniqueName: \"kubernetes.io/projected/291db876-7914-43f0-bb3f-c3ce3d78801c-kube-api-access-7bcv2\") on node \"crc\" DevicePath \"\"" Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.013894 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/291db876-7914-43f0-bb3f-c3ce3d78801c-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.013902 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/291db876-7914-43f0-bb3f-c3ce3d78801c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.013910 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/291db876-7914-43f0-bb3f-c3ce3d78801c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.013919 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.013928 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7af7cf81-e9b2-40d1-bd01-af48a4fa1242-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.014804 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd7a1836-1221-4f95-a0dd-ab008ba0196b-client-ca\") pod \"route-controller-manager-8d678cf5c-k2wlx\" (UID: \"fd7a1836-1221-4f95-a0dd-ab008ba0196b\") " pod="openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx" Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.017230 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd7a1836-1221-4f95-a0dd-ab008ba0196b-config\") pod \"route-controller-manager-8d678cf5c-k2wlx\" (UID: \"fd7a1836-1221-4f95-a0dd-ab008ba0196b\") " pod="openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx" Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.018202 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd7a1836-1221-4f95-a0dd-ab008ba0196b-serving-cert\") pod \"route-controller-manager-8d678cf5c-k2wlx\" (UID: \"fd7a1836-1221-4f95-a0dd-ab008ba0196b\") " pod="openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx" Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.031153 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwjgw\" (UniqueName: \"kubernetes.io/projected/fd7a1836-1221-4f95-a0dd-ab008ba0196b-kube-api-access-rwjgw\") pod \"route-controller-manager-8d678cf5c-k2wlx\" (UID: \"fd7a1836-1221-4f95-a0dd-ab008ba0196b\") " pod="openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx" Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.084841 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cddb56ddd-pfcbq" event={"ID":"291db876-7914-43f0-bb3f-c3ce3d78801c","Type":"ContainerDied","Data":"d4e9dff4084918f14c3010206c5ce8d550c225a5e1341603cdab3f79b78f5a5d"} Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.084889 4766 scope.go:117] "RemoveContainer" containerID="01cb54645d7985736116cb778ec180aa25b13bc4caa5088bb88029b3595a98bf" Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.084926 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cddb56ddd-pfcbq" Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.088203 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb" event={"ID":"7af7cf81-e9b2-40d1-bd01-af48a4fa1242","Type":"ContainerDied","Data":"0f3e7502686439b3ed4402b8645ed46671475abccac055a2c2a89c54bf834d31"} Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.088248 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb" Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.103604 4766 scope.go:117] "RemoveContainer" containerID="3a48b4522366cfbcf50c815d457ea6a0780045c03e7fd090d00f3a92b9f7ff76" Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.117234 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cddb56ddd-pfcbq"] Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.122989 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cddb56ddd-pfcbq"] Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.128119 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb"] Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.131678 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-54ffb9c8c5-mk6kb"] Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.144975 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx" Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.235925 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="291db876-7914-43f0-bb3f-c3ce3d78801c" path="/var/lib/kubelet/pods/291db876-7914-43f0-bb3f-c3ce3d78801c/volumes" Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.237085 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7af7cf81-e9b2-40d1-bd01-af48a4fa1242" path="/var/lib/kubelet/pods/7af7cf81-e9b2-40d1-bd01-af48a4fa1242/volumes" Jan 29 11:29:41 crc kubenswrapper[4766]: I0129 11:29:41.538107 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx"] Jan 29 11:29:41 crc kubenswrapper[4766]: W0129 11:29:41.555093 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd7a1836_1221_4f95_a0dd_ab008ba0196b.slice/crio-734b256106529ab9b92069e87480d6eabda38b142844f4e6dcffba1aa253dd60 WatchSource:0}: Error finding container 734b256106529ab9b92069e87480d6eabda38b142844f4e6dcffba1aa253dd60: Status 404 returned error can't find the container with id 734b256106529ab9b92069e87480d6eabda38b142844f4e6dcffba1aa253dd60 Jan 29 11:29:42 crc kubenswrapper[4766]: I0129 11:29:42.095930 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx" event={"ID":"fd7a1836-1221-4f95-a0dd-ab008ba0196b","Type":"ContainerStarted","Data":"97ba67fdd9e71014eba2cf6ff36f2df2a9d8f66f6da1ed5b7a892fbc9beea268"} Jan 29 11:29:42 crc kubenswrapper[4766]: I0129 11:29:42.096185 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx" event={"ID":"fd7a1836-1221-4f95-a0dd-ab008ba0196b","Type":"ContainerStarted","Data":"734b256106529ab9b92069e87480d6eabda38b142844f4e6dcffba1aa253dd60"} Jan 29 11:29:42 crc kubenswrapper[4766]: I0129 11:29:42.096436 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx" Jan 29 11:29:42 crc kubenswrapper[4766]: I0129 11:29:42.111994 4766 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx" podStartSLOduration=3.111972957 podStartE2EDuration="3.111972957s" podCreationTimestamp="2026-01-29 11:29:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:29:42.111916916 +0000 UTC m=+519.224309927" watchObservedRunningTime="2026-01-29 11:29:42.111972957 +0000 UTC m=+519.224365968" Jan 29 11:29:42 crc kubenswrapper[4766]: I0129 11:29:42.336679 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx" Jan 29 11:29:42 crc kubenswrapper[4766]: I0129 11:29:42.822634 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb"] Jan 29 11:29:42 crc kubenswrapper[4766]: E0129 11:29:42.823228 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7af7cf81-e9b2-40d1-bd01-af48a4fa1242" containerName="controller-manager" Jan 29 11:29:42 crc kubenswrapper[4766]: I0129 11:29:42.823244 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7af7cf81-e9b2-40d1-bd01-af48a4fa1242" containerName="controller-manager" Jan 29 11:29:42 crc kubenswrapper[4766]: I0129 11:29:42.823356 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7af7cf81-e9b2-40d1-bd01-af48a4fa1242" containerName="controller-manager" Jan 29 11:29:42 crc kubenswrapper[4766]: I0129 11:29:42.823807 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb" Jan 29 11:29:42 crc kubenswrapper[4766]: I0129 11:29:42.826519 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 11:29:42 crc kubenswrapper[4766]: I0129 11:29:42.826654 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 11:29:42 crc kubenswrapper[4766]: I0129 11:29:42.826811 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 11:29:42 crc kubenswrapper[4766]: I0129 11:29:42.826818 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 11:29:42 crc kubenswrapper[4766]: I0129 11:29:42.827270 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 11:29:42 crc kubenswrapper[4766]: I0129 11:29:42.829521 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 11:29:42 crc kubenswrapper[4766]: I0129 11:29:42.834263 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 11:29:42 crc kubenswrapper[4766]: I0129 11:29:42.837436 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb"] Jan 29 11:29:42 crc kubenswrapper[4766]: I0129 11:29:42.961099 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/076f9c64-d14f-4657-a5a4-e2df4808e02c-serving-cert\") pod \"controller-manager-6dc895bfcf-bqtnb\" (UID: 
\"076f9c64-d14f-4657-a5a4-e2df4808e02c\") " pod="openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb" Jan 29 11:29:42 crc kubenswrapper[4766]: I0129 11:29:42.961404 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/076f9c64-d14f-4657-a5a4-e2df4808e02c-proxy-ca-bundles\") pod \"controller-manager-6dc895bfcf-bqtnb\" (UID: \"076f9c64-d14f-4657-a5a4-e2df4808e02c\") " pod="openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb" Jan 29 11:29:42 crc kubenswrapper[4766]: I0129 11:29:42.961532 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/076f9c64-d14f-4657-a5a4-e2df4808e02c-client-ca\") pod \"controller-manager-6dc895bfcf-bqtnb\" (UID: \"076f9c64-d14f-4657-a5a4-e2df4808e02c\") " pod="openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb" Jan 29 11:29:42 crc kubenswrapper[4766]: I0129 11:29:42.961652 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52g6m\" (UniqueName: \"kubernetes.io/projected/076f9c64-d14f-4657-a5a4-e2df4808e02c-kube-api-access-52g6m\") pod \"controller-manager-6dc895bfcf-bqtnb\" (UID: \"076f9c64-d14f-4657-a5a4-e2df4808e02c\") " pod="openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb" Jan 29 11:29:42 crc kubenswrapper[4766]: I0129 11:29:42.961793 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/076f9c64-d14f-4657-a5a4-e2df4808e02c-config\") pod \"controller-manager-6dc895bfcf-bqtnb\" (UID: \"076f9c64-d14f-4657-a5a4-e2df4808e02c\") " pod="openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb" Jan 29 11:29:43 crc kubenswrapper[4766]: I0129 11:29:43.063044 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/076f9c64-d14f-4657-a5a4-e2df4808e02c-client-ca\") pod \"controller-manager-6dc895bfcf-bqtnb\" (UID: \"076f9c64-d14f-4657-a5a4-e2df4808e02c\") " pod="openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb" Jan 29 11:29:43 crc kubenswrapper[4766]: I0129 11:29:43.063122 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52g6m\" (UniqueName: \"kubernetes.io/projected/076f9c64-d14f-4657-a5a4-e2df4808e02c-kube-api-access-52g6m\") pod \"controller-manager-6dc895bfcf-bqtnb\" (UID: \"076f9c64-d14f-4657-a5a4-e2df4808e02c\") " pod="openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb" Jan 29 11:29:43 crc kubenswrapper[4766]: I0129 11:29:43.063165 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/076f9c64-d14f-4657-a5a4-e2df4808e02c-config\") pod \"controller-manager-6dc895bfcf-bqtnb\" (UID: \"076f9c64-d14f-4657-a5a4-e2df4808e02c\") " pod="openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb" Jan 29 11:29:43 crc kubenswrapper[4766]: I0129 11:29:43.063210 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/076f9c64-d14f-4657-a5a4-e2df4808e02c-serving-cert\") pod \"controller-manager-6dc895bfcf-bqtnb\" (UID: \"076f9c64-d14f-4657-a5a4-e2df4808e02c\") " pod="openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb" Jan 29 
11:29:43 crc kubenswrapper[4766]: I0129 11:29:43.063234 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/076f9c64-d14f-4657-a5a4-e2df4808e02c-proxy-ca-bundles\") pod \"controller-manager-6dc895bfcf-bqtnb\" (UID: \"076f9c64-d14f-4657-a5a4-e2df4808e02c\") " pod="openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb" Jan 29 11:29:43 crc kubenswrapper[4766]: I0129 11:29:43.064634 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/076f9c64-d14f-4657-a5a4-e2df4808e02c-client-ca\") pod \"controller-manager-6dc895bfcf-bqtnb\" (UID: \"076f9c64-d14f-4657-a5a4-e2df4808e02c\") " pod="openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb" Jan 29 11:29:43 crc kubenswrapper[4766]: I0129 11:29:43.064914 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/076f9c64-d14f-4657-a5a4-e2df4808e02c-config\") pod \"controller-manager-6dc895bfcf-bqtnb\" (UID: \"076f9c64-d14f-4657-a5a4-e2df4808e02c\") " pod="openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb" Jan 29 11:29:43 crc kubenswrapper[4766]: I0129 11:29:43.065840 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/076f9c64-d14f-4657-a5a4-e2df4808e02c-proxy-ca-bundles\") pod \"controller-manager-6dc895bfcf-bqtnb\" (UID: \"076f9c64-d14f-4657-a5a4-e2df4808e02c\") " pod="openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb" Jan 29 11:29:43 crc kubenswrapper[4766]: I0129 11:29:43.084549 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/076f9c64-d14f-4657-a5a4-e2df4808e02c-serving-cert\") pod \"controller-manager-6dc895bfcf-bqtnb\" (UID: \"076f9c64-d14f-4657-a5a4-e2df4808e02c\") " pod="openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb" Jan 29 11:29:43 crc kubenswrapper[4766]: I0129 11:29:43.088546 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52g6m\" (UniqueName: \"kubernetes.io/projected/076f9c64-d14f-4657-a5a4-e2df4808e02c-kube-api-access-52g6m\") pod \"controller-manager-6dc895bfcf-bqtnb\" (UID: \"076f9c64-d14f-4657-a5a4-e2df4808e02c\") " pod="openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb" Jan 29 11:29:43 crc kubenswrapper[4766]: I0129 11:29:43.143640 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb" Jan 29 11:29:43 crc kubenswrapper[4766]: I0129 11:29:43.543945 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb"] Jan 29 11:29:44 crc kubenswrapper[4766]: I0129 11:29:44.112163 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb" event={"ID":"076f9c64-d14f-4657-a5a4-e2df4808e02c","Type":"ContainerStarted","Data":"0b3bd21525fbf28661712f32e2be85c2701eefba174eafdfbca080b03ef321a7"} Jan 29 11:29:45 crc kubenswrapper[4766]: I0129 11:29:45.118197 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb" event={"ID":"076f9c64-d14f-4657-a5a4-e2df4808e02c","Type":"ContainerStarted","Data":"327ccd9dcc41d75087df0c4031b942f012cd8748cc3a8356d2a269639d7de6c3"} Jan 29 11:29:45 crc kubenswrapper[4766]: I0129 11:29:45.118546 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb" Jan 29 11:29:45 crc kubenswrapper[4766]: I0129 11:29:45.124133 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb" Jan 29 11:29:45 crc kubenswrapper[4766]: I0129 11:29:45.137358 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb" podStartSLOduration=6.137322224 podStartE2EDuration="6.137322224s" podCreationTimestamp="2026-01-29 11:29:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:29:45.134440099 +0000 UTC m=+522.246833110" watchObservedRunningTime="2026-01-29 11:29:45.137322224 +0000 UTC m=+522.249715235" Jan 29 11:29:59 crc kubenswrapper[4766]: I0129 11:29:59.608475 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb"] Jan 29 11:29:59 crc kubenswrapper[4766]: I0129 11:29:59.609347 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb" podUID="076f9c64-d14f-4657-a5a4-e2df4808e02c" containerName="controller-manager" containerID="cri-o://327ccd9dcc41d75087df0c4031b942f012cd8748cc3a8356d2a269639d7de6c3" gracePeriod=30 Jan 29 11:29:59 crc kubenswrapper[4766]: I0129 11:29:59.622462 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx"] Jan 29 11:29:59 crc kubenswrapper[4766]: I0129 11:29:59.622657 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx" podUID="fd7a1836-1221-4f95-a0dd-ab008ba0196b" containerName="route-controller-manager" containerID="cri-o://97ba67fdd9e71014eba2cf6ff36f2df2a9d8f66f6da1ed5b7a892fbc9beea268" gracePeriod=30 Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.174359 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494770-m9w9z"] Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.174957 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-m9w9z" Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.176745 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.176908 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.183426 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494770-m9w9z"] Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.199304 4766 generic.go:334] "Generic (PLEG): container finished" podID="fd7a1836-1221-4f95-a0dd-ab008ba0196b" containerID="97ba67fdd9e71014eba2cf6ff36f2df2a9d8f66f6da1ed5b7a892fbc9beea268" exitCode=0 Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.199431 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx" event={"ID":"fd7a1836-1221-4f95-a0dd-ab008ba0196b","Type":"ContainerDied","Data":"97ba67fdd9e71014eba2cf6ff36f2df2a9d8f66f6da1ed5b7a892fbc9beea268"} Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.201152 4766 generic.go:334] "Generic (PLEG): container finished" podID="076f9c64-d14f-4657-a5a4-e2df4808e02c" containerID="327ccd9dcc41d75087df0c4031b942f012cd8748cc3a8356d2a269639d7de6c3" exitCode=0 Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.201195 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb" event={"ID":"076f9c64-d14f-4657-a5a4-e2df4808e02c","Type":"ContainerDied","Data":"327ccd9dcc41d75087df0c4031b942f012cd8748cc3a8356d2a269639d7de6c3"} Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.282559 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/833ad5a8-865a-420a-8337-976684a1c9bd-config-volume\") pod \"collect-profiles-29494770-m9w9z\" (UID: \"833ad5a8-865a-420a-8337-976684a1c9bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-m9w9z" Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.282967 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knd6h\" (UniqueName: \"kubernetes.io/projected/833ad5a8-865a-420a-8337-976684a1c9bd-kube-api-access-knd6h\") pod \"collect-profiles-29494770-m9w9z\" (UID: \"833ad5a8-865a-420a-8337-976684a1c9bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-m9w9z" Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.283046 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/833ad5a8-865a-420a-8337-976684a1c9bd-secret-volume\") pod \"collect-profiles-29494770-m9w9z\" (UID: \"833ad5a8-865a-420a-8337-976684a1c9bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-m9w9z" Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.384357 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/833ad5a8-865a-420a-8337-976684a1c9bd-secret-volume\") pod \"collect-profiles-29494770-m9w9z\" (UID: 
\"833ad5a8-865a-420a-8337-976684a1c9bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-m9w9z" Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.384447 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/833ad5a8-865a-420a-8337-976684a1c9bd-config-volume\") pod \"collect-profiles-29494770-m9w9z\" (UID: \"833ad5a8-865a-420a-8337-976684a1c9bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-m9w9z" Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.384474 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knd6h\" (UniqueName: \"kubernetes.io/projected/833ad5a8-865a-420a-8337-976684a1c9bd-kube-api-access-knd6h\") pod \"collect-profiles-29494770-m9w9z\" (UID: \"833ad5a8-865a-420a-8337-976684a1c9bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-m9w9z" Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.385539 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/833ad5a8-865a-420a-8337-976684a1c9bd-config-volume\") pod \"collect-profiles-29494770-m9w9z\" (UID: \"833ad5a8-865a-420a-8337-976684a1c9bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-m9w9z" Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.399279 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/833ad5a8-865a-420a-8337-976684a1c9bd-secret-volume\") pod \"collect-profiles-29494770-m9w9z\" (UID: \"833ad5a8-865a-420a-8337-976684a1c9bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-m9w9z" Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.410221 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knd6h\" (UniqueName: \"kubernetes.io/projected/833ad5a8-865a-420a-8337-976684a1c9bd-kube-api-access-knd6h\") pod \"collect-profiles-29494770-m9w9z\" (UID: \"833ad5a8-865a-420a-8337-976684a1c9bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-m9w9z" Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.494705 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-m9w9z" Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.753113 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx" Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.792143 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf"] Jan 29 11:30:00 crc kubenswrapper[4766]: E0129 11:30:00.796474 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd7a1836-1221-4f95-a0dd-ab008ba0196b" containerName="route-controller-manager" Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.796509 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd7a1836-1221-4f95-a0dd-ab008ba0196b" containerName="route-controller-manager" Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.796847 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd7a1836-1221-4f95-a0dd-ab008ba0196b" containerName="route-controller-manager" Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.800526 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf" Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.808731 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf"] Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.899993 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd7a1836-1221-4f95-a0dd-ab008ba0196b-serving-cert\") pod \"fd7a1836-1221-4f95-a0dd-ab008ba0196b\" (UID: \"fd7a1836-1221-4f95-a0dd-ab008ba0196b\") " Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.900029 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd7a1836-1221-4f95-a0dd-ab008ba0196b-client-ca\") pod \"fd7a1836-1221-4f95-a0dd-ab008ba0196b\" (UID: \"fd7a1836-1221-4f95-a0dd-ab008ba0196b\") " Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.900055 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwjgw\" (UniqueName: \"kubernetes.io/projected/fd7a1836-1221-4f95-a0dd-ab008ba0196b-kube-api-access-rwjgw\") pod \"fd7a1836-1221-4f95-a0dd-ab008ba0196b\" (UID: \"fd7a1836-1221-4f95-a0dd-ab008ba0196b\") " Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.900104 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd7a1836-1221-4f95-a0dd-ab008ba0196b-config\") pod \"fd7a1836-1221-4f95-a0dd-ab008ba0196b\" (UID: \"fd7a1836-1221-4f95-a0dd-ab008ba0196b\") " Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.900308 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/decf0d8c-7e98-464e-b3e5-fbd6a0856859-serving-cert\") pod \"route-controller-manager-f9f5c8867-zc9wf\" (UID: \"decf0d8c-7e98-464e-b3e5-fbd6a0856859\") " pod="openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf" Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.900343 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g2f2\" (UniqueName: \"kubernetes.io/projected/decf0d8c-7e98-464e-b3e5-fbd6a0856859-kube-api-access-2g2f2\") pod \"route-controller-manager-f9f5c8867-zc9wf\" (UID: 
\"decf0d8c-7e98-464e-b3e5-fbd6a0856859\") " pod="openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf" Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.900377 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/decf0d8c-7e98-464e-b3e5-fbd6a0856859-config\") pod \"route-controller-manager-f9f5c8867-zc9wf\" (UID: \"decf0d8c-7e98-464e-b3e5-fbd6a0856859\") " pod="openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf" Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.900590 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/decf0d8c-7e98-464e-b3e5-fbd6a0856859-client-ca\") pod \"route-controller-manager-f9f5c8867-zc9wf\" (UID: \"decf0d8c-7e98-464e-b3e5-fbd6a0856859\") " pod="openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf" Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.901588 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd7a1836-1221-4f95-a0dd-ab008ba0196b-client-ca" (OuterVolumeSpecName: "client-ca") pod "fd7a1836-1221-4f95-a0dd-ab008ba0196b" (UID: "fd7a1836-1221-4f95-a0dd-ab008ba0196b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.901609 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd7a1836-1221-4f95-a0dd-ab008ba0196b-config" (OuterVolumeSpecName: "config") pod "fd7a1836-1221-4f95-a0dd-ab008ba0196b" (UID: "fd7a1836-1221-4f95-a0dd-ab008ba0196b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.905038 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd7a1836-1221-4f95-a0dd-ab008ba0196b-kube-api-access-rwjgw" (OuterVolumeSpecName: "kube-api-access-rwjgw") pod "fd7a1836-1221-4f95-a0dd-ab008ba0196b" (UID: "fd7a1836-1221-4f95-a0dd-ab008ba0196b"). InnerVolumeSpecName "kube-api-access-rwjgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.905325 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd7a1836-1221-4f95-a0dd-ab008ba0196b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fd7a1836-1221-4f95-a0dd-ab008ba0196b" (UID: "fd7a1836-1221-4f95-a0dd-ab008ba0196b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.922611 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494770-m9w9z"] Jan 29 11:30:00 crc kubenswrapper[4766]: I0129 11:30:00.959277 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb" Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.002706 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/decf0d8c-7e98-464e-b3e5-fbd6a0856859-serving-cert\") pod \"route-controller-manager-f9f5c8867-zc9wf\" (UID: \"decf0d8c-7e98-464e-b3e5-fbd6a0856859\") " pod="openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf" Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.003344 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g2f2\" (UniqueName: \"kubernetes.io/projected/decf0d8c-7e98-464e-b3e5-fbd6a0856859-kube-api-access-2g2f2\") pod \"route-controller-manager-f9f5c8867-zc9wf\" (UID: \"decf0d8c-7e98-464e-b3e5-fbd6a0856859\") " pod="openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf" Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.003532 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/decf0d8c-7e98-464e-b3e5-fbd6a0856859-config\") pod \"route-controller-manager-f9f5c8867-zc9wf\" (UID: \"decf0d8c-7e98-464e-b3e5-fbd6a0856859\") " pod="openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf" Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.003697 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/decf0d8c-7e98-464e-b3e5-fbd6a0856859-client-ca\") pod \"route-controller-manager-f9f5c8867-zc9wf\" (UID: \"decf0d8c-7e98-464e-b3e5-fbd6a0856859\") " pod="openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf" Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.003838 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd7a1836-1221-4f95-a0dd-ab008ba0196b-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.003964 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd7a1836-1221-4f95-a0dd-ab008ba0196b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.004060 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd7a1836-1221-4f95-a0dd-ab008ba0196b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.004144 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwjgw\" (UniqueName: \"kubernetes.io/projected/fd7a1836-1221-4f95-a0dd-ab008ba0196b-kube-api-access-rwjgw\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.006111 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/decf0d8c-7e98-464e-b3e5-fbd6a0856859-client-ca\") pod \"route-controller-manager-f9f5c8867-zc9wf\" (UID: \"decf0d8c-7e98-464e-b3e5-fbd6a0856859\") " pod="openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf" Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.006284 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/decf0d8c-7e98-464e-b3e5-fbd6a0856859-config\") pod 
\"route-controller-manager-f9f5c8867-zc9wf\" (UID: \"decf0d8c-7e98-464e-b3e5-fbd6a0856859\") " pod="openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf" Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.007214 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/decf0d8c-7e98-464e-b3e5-fbd6a0856859-serving-cert\") pod \"route-controller-manager-f9f5c8867-zc9wf\" (UID: \"decf0d8c-7e98-464e-b3e5-fbd6a0856859\") " pod="openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf" Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.024043 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g2f2\" (UniqueName: \"kubernetes.io/projected/decf0d8c-7e98-464e-b3e5-fbd6a0856859-kube-api-access-2g2f2\") pod \"route-controller-manager-f9f5c8867-zc9wf\" (UID: \"decf0d8c-7e98-464e-b3e5-fbd6a0856859\") " pod="openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf" Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.105588 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/076f9c64-d14f-4657-a5a4-e2df4808e02c-client-ca\") pod \"076f9c64-d14f-4657-a5a4-e2df4808e02c\" (UID: \"076f9c64-d14f-4657-a5a4-e2df4808e02c\") " Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.105929 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/076f9c64-d14f-4657-a5a4-e2df4808e02c-serving-cert\") pod \"076f9c64-d14f-4657-a5a4-e2df4808e02c\" (UID: \"076f9c64-d14f-4657-a5a4-e2df4808e02c\") " Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.105986 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/076f9c64-d14f-4657-a5a4-e2df4808e02c-proxy-ca-bundles\") pod \"076f9c64-d14f-4657-a5a4-e2df4808e02c\" (UID: \"076f9c64-d14f-4657-a5a4-e2df4808e02c\") " Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.106008 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52g6m\" (UniqueName: \"kubernetes.io/projected/076f9c64-d14f-4657-a5a4-e2df4808e02c-kube-api-access-52g6m\") pod \"076f9c64-d14f-4657-a5a4-e2df4808e02c\" (UID: \"076f9c64-d14f-4657-a5a4-e2df4808e02c\") " Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.106058 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/076f9c64-d14f-4657-a5a4-e2df4808e02c-config\") pod \"076f9c64-d14f-4657-a5a4-e2df4808e02c\" (UID: \"076f9c64-d14f-4657-a5a4-e2df4808e02c\") " Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.106717 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/076f9c64-d14f-4657-a5a4-e2df4808e02c-client-ca" (OuterVolumeSpecName: "client-ca") pod "076f9c64-d14f-4657-a5a4-e2df4808e02c" (UID: "076f9c64-d14f-4657-a5a4-e2df4808e02c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.106763 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/076f9c64-d14f-4657-a5a4-e2df4808e02c-config" (OuterVolumeSpecName: "config") pod "076f9c64-d14f-4657-a5a4-e2df4808e02c" (UID: "076f9c64-d14f-4657-a5a4-e2df4808e02c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.107203 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/076f9c64-d14f-4657-a5a4-e2df4808e02c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "076f9c64-d14f-4657-a5a4-e2df4808e02c" (UID: "076f9c64-d14f-4657-a5a4-e2df4808e02c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.109398 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/076f9c64-d14f-4657-a5a4-e2df4808e02c-kube-api-access-52g6m" (OuterVolumeSpecName: "kube-api-access-52g6m") pod "076f9c64-d14f-4657-a5a4-e2df4808e02c" (UID: "076f9c64-d14f-4657-a5a4-e2df4808e02c"). InnerVolumeSpecName "kube-api-access-52g6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.109639 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/076f9c64-d14f-4657-a5a4-e2df4808e02c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "076f9c64-d14f-4657-a5a4-e2df4808e02c" (UID: "076f9c64-d14f-4657-a5a4-e2df4808e02c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.134077 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf" Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.206874 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/076f9c64-d14f-4657-a5a4-e2df4808e02c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.206897 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/076f9c64-d14f-4657-a5a4-e2df4808e02c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.206906 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/076f9c64-d14f-4657-a5a4-e2df4808e02c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.206958 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52g6m\" (UniqueName: \"kubernetes.io/projected/076f9c64-d14f-4657-a5a4-e2df4808e02c-kube-api-access-52g6m\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.206988 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/076f9c64-d14f-4657-a5a4-e2df4808e02c-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.215777 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb" event={"ID":"076f9c64-d14f-4657-a5a4-e2df4808e02c","Type":"ContainerDied","Data":"0b3bd21525fbf28661712f32e2be85c2701eefba174eafdfbca080b03ef321a7"} Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.215839 4766 scope.go:117] "RemoveContainer" containerID="327ccd9dcc41d75087df0c4031b942f012cd8748cc3a8356d2a269639d7de6c3" Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.216070 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb" Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.219298 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-m9w9z" event={"ID":"833ad5a8-865a-420a-8337-976684a1c9bd","Type":"ContainerStarted","Data":"780923dbbe84115e54dee2b3c4a6af834ec8aaeb3d452c6dd595f19a9fe665fc"} Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.225891 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx" Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.231946 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx" event={"ID":"fd7a1836-1221-4f95-a0dd-ab008ba0196b","Type":"ContainerDied","Data":"734b256106529ab9b92069e87480d6eabda38b142844f4e6dcffba1aa253dd60"} Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.253714 4766 scope.go:117] "RemoveContainer" containerID="97ba67fdd9e71014eba2cf6ff36f2df2a9d8f66f6da1ed5b7a892fbc9beea268" Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.285132 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb"] Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.289746 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6dc895bfcf-bqtnb"] Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.292670 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx"] Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.296935 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8d678cf5c-k2wlx"] Jan 29 11:30:01 crc kubenswrapper[4766]: I0129 11:30:01.610057 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf"] Jan 29 11:30:02 crc kubenswrapper[4766]: I0129 11:30:02.232832 4766 generic.go:334] "Generic (PLEG): container finished" podID="833ad5a8-865a-420a-8337-976684a1c9bd" containerID="4dd839fac298626b3660bac0cbeaa67e24c1bf48eefaba2f46a958ba0ffff417" exitCode=0 Jan 29 11:30:02 crc kubenswrapper[4766]: I0129 11:30:02.232915 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-m9w9z" event={"ID":"833ad5a8-865a-420a-8337-976684a1c9bd","Type":"ContainerDied","Data":"4dd839fac298626b3660bac0cbeaa67e24c1bf48eefaba2f46a958ba0ffff417"} Jan 29 11:30:02 crc kubenswrapper[4766]: I0129 11:30:02.237636 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf" event={"ID":"decf0d8c-7e98-464e-b3e5-fbd6a0856859","Type":"ContainerStarted","Data":"8f27d3e7a4d5d71cedaa1305ba7ed1ad796a1e80a847502f6ddd421ec89d646d"} Jan 29 11:30:02 crc kubenswrapper[4766]: I0129 11:30:02.237678 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf" event={"ID":"decf0d8c-7e98-464e-b3e5-fbd6a0856859","Type":"ContainerStarted","Data":"cfff9066b51c84c6f26b5c947816c031943388129dc52b4a82a653996ea638f8"} Jan 29 11:30:02 crc kubenswrapper[4766]: I0129 11:30:02.237827 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf" Jan 29 11:30:02 crc kubenswrapper[4766]: I0129 11:30:02.266601 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf" podStartSLOduration=3.266578314 podStartE2EDuration="3.266578314s" podCreationTimestamp="2026-01-29 11:29:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:30:02.262521225 +0000 UTC m=+539.374914246" watchObservedRunningTime="2026-01-29 11:30:02.266578314 +0000 UTC m=+539.378971345" Jan 29 11:30:02 crc kubenswrapper[4766]: I0129 11:30:02.690807 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf" Jan 29 11:30:02 crc kubenswrapper[4766]: I0129 11:30:02.839667 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7456d7f74f-4pfct"] Jan 29 11:30:02 crc kubenswrapper[4766]: E0129 11:30:02.840396 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="076f9c64-d14f-4657-a5a4-e2df4808e02c" containerName="controller-manager" Jan 29 11:30:02 crc kubenswrapper[4766]: I0129 11:30:02.840433 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="076f9c64-d14f-4657-a5a4-e2df4808e02c" containerName="controller-manager" Jan 29 11:30:02 crc kubenswrapper[4766]: I0129 11:30:02.840570 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="076f9c64-d14f-4657-a5a4-e2df4808e02c" containerName="controller-manager" Jan 29 11:30:02 crc kubenswrapper[4766]: I0129 11:30:02.841086 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7456d7f74f-4pfct" Jan 29 11:30:02 crc kubenswrapper[4766]: I0129 11:30:02.844689 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 11:30:02 crc kubenswrapper[4766]: I0129 11:30:02.846689 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 11:30:02 crc kubenswrapper[4766]: I0129 11:30:02.846950 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 11:30:02 crc kubenswrapper[4766]: I0129 11:30:02.847071 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 11:30:02 crc kubenswrapper[4766]: I0129 11:30:02.847303 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 11:30:02 crc kubenswrapper[4766]: I0129 11:30:02.847695 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 11:30:02 crc kubenswrapper[4766]: I0129 11:30:02.852072 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 11:30:02 crc kubenswrapper[4766]: I0129 11:30:02.863799 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7456d7f74f-4pfct"] Jan 29 11:30:02 crc kubenswrapper[4766]: I0129 11:30:02.928594 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2b92849-383c-4876-bb0c-a0895dd534df-proxy-ca-bundles\") pod \"controller-manager-7456d7f74f-4pfct\" (UID: \"f2b92849-383c-4876-bb0c-a0895dd534df\") " pod="openshift-controller-manager/controller-manager-7456d7f74f-4pfct" Jan 29 11:30:02 crc kubenswrapper[4766]: I0129 11:30:02.928677 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/f2b92849-383c-4876-bb0c-a0895dd534df-config\") pod \"controller-manager-7456d7f74f-4pfct\" (UID: \"f2b92849-383c-4876-bb0c-a0895dd534df\") " pod="openshift-controller-manager/controller-manager-7456d7f74f-4pfct" Jan 29 11:30:02 crc kubenswrapper[4766]: I0129 11:30:02.928711 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2b92849-383c-4876-bb0c-a0895dd534df-serving-cert\") pod \"controller-manager-7456d7f74f-4pfct\" (UID: \"f2b92849-383c-4876-bb0c-a0895dd534df\") " pod="openshift-controller-manager/controller-manager-7456d7f74f-4pfct" Jan 29 11:30:02 crc kubenswrapper[4766]: I0129 11:30:02.928757 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj4c9\" (UniqueName: \"kubernetes.io/projected/f2b92849-383c-4876-bb0c-a0895dd534df-kube-api-access-rj4c9\") pod \"controller-manager-7456d7f74f-4pfct\" (UID: \"f2b92849-383c-4876-bb0c-a0895dd534df\") " pod="openshift-controller-manager/controller-manager-7456d7f74f-4pfct" Jan 29 11:30:02 crc kubenswrapper[4766]: I0129 11:30:02.928835 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2b92849-383c-4876-bb0c-a0895dd534df-client-ca\") pod \"controller-manager-7456d7f74f-4pfct\" (UID: \"f2b92849-383c-4876-bb0c-a0895dd534df\") " pod="openshift-controller-manager/controller-manager-7456d7f74f-4pfct" Jan 29 11:30:03 crc kubenswrapper[4766]: I0129 11:30:03.030112 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2b92849-383c-4876-bb0c-a0895dd534df-client-ca\") pod \"controller-manager-7456d7f74f-4pfct\" (UID: \"f2b92849-383c-4876-bb0c-a0895dd534df\") " pod="openshift-controller-manager/controller-manager-7456d7f74f-4pfct" Jan 29 11:30:03 crc kubenswrapper[4766]: I0129 11:30:03.030190 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2b92849-383c-4876-bb0c-a0895dd534df-proxy-ca-bundles\") pod \"controller-manager-7456d7f74f-4pfct\" (UID: \"f2b92849-383c-4876-bb0c-a0895dd534df\") " pod="openshift-controller-manager/controller-manager-7456d7f74f-4pfct" Jan 29 11:30:03 crc kubenswrapper[4766]: I0129 11:30:03.030221 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b92849-383c-4876-bb0c-a0895dd534df-config\") pod \"controller-manager-7456d7f74f-4pfct\" (UID: \"f2b92849-383c-4876-bb0c-a0895dd534df\") " pod="openshift-controller-manager/controller-manager-7456d7f74f-4pfct" Jan 29 11:30:03 crc kubenswrapper[4766]: I0129 11:30:03.030239 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2b92849-383c-4876-bb0c-a0895dd534df-serving-cert\") pod \"controller-manager-7456d7f74f-4pfct\" (UID: \"f2b92849-383c-4876-bb0c-a0895dd534df\") " pod="openshift-controller-manager/controller-manager-7456d7f74f-4pfct" Jan 29 11:30:03 crc kubenswrapper[4766]: I0129 11:30:03.030266 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj4c9\" (UniqueName: \"kubernetes.io/projected/f2b92849-383c-4876-bb0c-a0895dd534df-kube-api-access-rj4c9\") pod \"controller-manager-7456d7f74f-4pfct\" (UID: 
\"f2b92849-383c-4876-bb0c-a0895dd534df\") " pod="openshift-controller-manager/controller-manager-7456d7f74f-4pfct" Jan 29 11:30:03 crc kubenswrapper[4766]: I0129 11:30:03.031288 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2b92849-383c-4876-bb0c-a0895dd534df-client-ca\") pod \"controller-manager-7456d7f74f-4pfct\" (UID: \"f2b92849-383c-4876-bb0c-a0895dd534df\") " pod="openshift-controller-manager/controller-manager-7456d7f74f-4pfct" Jan 29 11:30:03 crc kubenswrapper[4766]: I0129 11:30:03.031767 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2b92849-383c-4876-bb0c-a0895dd534df-proxy-ca-bundles\") pod \"controller-manager-7456d7f74f-4pfct\" (UID: \"f2b92849-383c-4876-bb0c-a0895dd534df\") " pod="openshift-controller-manager/controller-manager-7456d7f74f-4pfct" Jan 29 11:30:03 crc kubenswrapper[4766]: I0129 11:30:03.032758 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b92849-383c-4876-bb0c-a0895dd534df-config\") pod \"controller-manager-7456d7f74f-4pfct\" (UID: \"f2b92849-383c-4876-bb0c-a0895dd534df\") " pod="openshift-controller-manager/controller-manager-7456d7f74f-4pfct" Jan 29 11:30:03 crc kubenswrapper[4766]: I0129 11:30:03.036458 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2b92849-383c-4876-bb0c-a0895dd534df-serving-cert\") pod \"controller-manager-7456d7f74f-4pfct\" (UID: \"f2b92849-383c-4876-bb0c-a0895dd534df\") " pod="openshift-controller-manager/controller-manager-7456d7f74f-4pfct" Jan 29 11:30:03 crc kubenswrapper[4766]: I0129 11:30:03.047494 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj4c9\" (UniqueName: \"kubernetes.io/projected/f2b92849-383c-4876-bb0c-a0895dd534df-kube-api-access-rj4c9\") pod \"controller-manager-7456d7f74f-4pfct\" (UID: \"f2b92849-383c-4876-bb0c-a0895dd534df\") " pod="openshift-controller-manager/controller-manager-7456d7f74f-4pfct" Jan 29 11:30:03 crc kubenswrapper[4766]: I0129 11:30:03.160854 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7456d7f74f-4pfct" Jan 29 11:30:03 crc kubenswrapper[4766]: I0129 11:30:03.231226 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="076f9c64-d14f-4657-a5a4-e2df4808e02c" path="/var/lib/kubelet/pods/076f9c64-d14f-4657-a5a4-e2df4808e02c/volumes" Jan 29 11:30:03 crc kubenswrapper[4766]: I0129 11:30:03.232006 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd7a1836-1221-4f95-a0dd-ab008ba0196b" path="/var/lib/kubelet/pods/fd7a1836-1221-4f95-a0dd-ab008ba0196b/volumes" Jan 29 11:30:03 crc kubenswrapper[4766]: I0129 11:30:03.517643 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-m9w9z" Jan 29 11:30:03 crc kubenswrapper[4766]: I0129 11:30:03.553106 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7456d7f74f-4pfct"] Jan 29 11:30:03 crc kubenswrapper[4766]: W0129 11:30:03.561158 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2b92849_383c_4876_bb0c_a0895dd534df.slice/crio-2cf98ec6081f6f49e586fce03eddb55529dcd0cdfa35176e463bbaa188e82a11 WatchSource:0}: Error finding container 2cf98ec6081f6f49e586fce03eddb55529dcd0cdfa35176e463bbaa188e82a11: Status 404 returned error can't find the container with id 2cf98ec6081f6f49e586fce03eddb55529dcd0cdfa35176e463bbaa188e82a11 Jan 29 11:30:03 crc kubenswrapper[4766]: I0129 11:30:03.637010 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knd6h\" (UniqueName: \"kubernetes.io/projected/833ad5a8-865a-420a-8337-976684a1c9bd-kube-api-access-knd6h\") pod \"833ad5a8-865a-420a-8337-976684a1c9bd\" (UID: \"833ad5a8-865a-420a-8337-976684a1c9bd\") " Jan 29 11:30:03 crc kubenswrapper[4766]: I0129 11:30:03.637065 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/833ad5a8-865a-420a-8337-976684a1c9bd-config-volume\") pod \"833ad5a8-865a-420a-8337-976684a1c9bd\" (UID: \"833ad5a8-865a-420a-8337-976684a1c9bd\") " Jan 29 11:30:03 crc kubenswrapper[4766]: I0129 11:30:03.637107 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/833ad5a8-865a-420a-8337-976684a1c9bd-secret-volume\") pod \"833ad5a8-865a-420a-8337-976684a1c9bd\" (UID: \"833ad5a8-865a-420a-8337-976684a1c9bd\") " Jan 29 11:30:03 crc kubenswrapper[4766]: I0129 11:30:03.638348 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/833ad5a8-865a-420a-8337-976684a1c9bd-config-volume" (OuterVolumeSpecName: "config-volume") pod "833ad5a8-865a-420a-8337-976684a1c9bd" (UID: "833ad5a8-865a-420a-8337-976684a1c9bd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:30:03 crc kubenswrapper[4766]: I0129 11:30:03.642223 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/833ad5a8-865a-420a-8337-976684a1c9bd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "833ad5a8-865a-420a-8337-976684a1c9bd" (UID: "833ad5a8-865a-420a-8337-976684a1c9bd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:30:03 crc kubenswrapper[4766]: I0129 11:30:03.642366 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/833ad5a8-865a-420a-8337-976684a1c9bd-kube-api-access-knd6h" (OuterVolumeSpecName: "kube-api-access-knd6h") pod "833ad5a8-865a-420a-8337-976684a1c9bd" (UID: "833ad5a8-865a-420a-8337-976684a1c9bd"). InnerVolumeSpecName "kube-api-access-knd6h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:30:03 crc kubenswrapper[4766]: I0129 11:30:03.739073 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knd6h\" (UniqueName: \"kubernetes.io/projected/833ad5a8-865a-420a-8337-976684a1c9bd-kube-api-access-knd6h\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:03 crc kubenswrapper[4766]: I0129 11:30:03.739162 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/833ad5a8-865a-420a-8337-976684a1c9bd-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:03 crc kubenswrapper[4766]: I0129 11:30:03.739226 4766 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/833ad5a8-865a-420a-8337-976684a1c9bd-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:04 crc kubenswrapper[4766]: I0129 11:30:04.255156 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7456d7f74f-4pfct" event={"ID":"f2b92849-383c-4876-bb0c-a0895dd534df","Type":"ContainerStarted","Data":"4530900a3d56d2b13049b9f60f93cadbdd3ebb3f33c90ea6cffe6ddba2dd895b"} Jan 29 11:30:04 crc kubenswrapper[4766]: I0129 11:30:04.255498 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7456d7f74f-4pfct" Jan 29 11:30:04 crc kubenswrapper[4766]: I0129 11:30:04.255515 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7456d7f74f-4pfct" event={"ID":"f2b92849-383c-4876-bb0c-a0895dd534df","Type":"ContainerStarted","Data":"2cf98ec6081f6f49e586fce03eddb55529dcd0cdfa35176e463bbaa188e82a11"} Jan 29 11:30:04 crc kubenswrapper[4766]: I0129 11:30:04.256810 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-m9w9z" Jan 29 11:30:04 crc kubenswrapper[4766]: I0129 11:30:04.256820 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-m9w9z" event={"ID":"833ad5a8-865a-420a-8337-976684a1c9bd","Type":"ContainerDied","Data":"780923dbbe84115e54dee2b3c4a6af834ec8aaeb3d452c6dd595f19a9fe665fc"} Jan 29 11:30:04 crc kubenswrapper[4766]: I0129 11:30:04.256865 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="780923dbbe84115e54dee2b3c4a6af834ec8aaeb3d452c6dd595f19a9fe665fc" Jan 29 11:30:04 crc kubenswrapper[4766]: I0129 11:30:04.260821 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7456d7f74f-4pfct" Jan 29 11:30:04 crc kubenswrapper[4766]: I0129 11:30:04.278250 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7456d7f74f-4pfct" podStartSLOduration=5.278230708 podStartE2EDuration="5.278230708s" podCreationTimestamp="2026-01-29 11:29:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:30:04.274077007 +0000 UTC m=+541.386470038" watchObservedRunningTime="2026-01-29 11:30:04.278230708 +0000 UTC m=+541.390623729" Jan 29 11:30:19 crc kubenswrapper[4766]: I0129 11:30:19.542049 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9bpkx"] Jan 29 11:30:19 crc kubenswrapper[4766]: I0129 11:30:19.542792 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9bpkx" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" containerName="registry-server" containerID="cri-o://9128a9bb705d8143f3e3b108dd9b69778f90d66fccfea3699ac54c69b6a3bd5c" gracePeriod=30 Jan 29 11:30:19 crc kubenswrapper[4766]: I0129 11:30:19.561401 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tx9nf"] Jan 29 11:30:19 crc kubenswrapper[4766]: I0129 11:30:19.562595 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tx9nf" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" containerName="registry-server" containerID="cri-o://c49a5657f4047d3b4ebc585eeb00c9ca7a83e764b486c9e6912d17a4a490c00a" gracePeriod=30 Jan 29 11:30:19 crc kubenswrapper[4766]: I0129 11:30:19.568996 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ztc7c"] Jan 29 11:30:19 crc kubenswrapper[4766]: I0129 11:30:19.571677 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" podUID="72cf9723-cba4-4f3b-90c4-c8b919e9b7a8" containerName="marketplace-operator" containerID="cri-o://11b096c9f2105a2d593c3bc6034399a160aeb36772d70712f82e2a14692dc61a" gracePeriod=30 Jan 29 11:30:19 crc kubenswrapper[4766]: I0129 11:30:19.580966 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-plg8c"] Jan 29 11:30:19 crc kubenswrapper[4766]: I0129 11:30:19.581331 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-plg8c" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" 
containerName="registry-server" containerID="cri-o://f13e58f3874e0f03a18028a1f078d889b05c172817f3173a7e8156921e66571a" gracePeriod=30 Jan 29 11:30:19 crc kubenswrapper[4766]: I0129 11:30:19.607527 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8gnkm"] Jan 29 11:30:19 crc kubenswrapper[4766]: I0129 11:30:19.607872 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8gnkm" podUID="8a36521a-d4cf-4c8e-8dbe-61599b472068" containerName="registry-server" containerID="cri-o://ebd6058e4f4c04ae01f565703745d5c00713a10ea2c182e01278af2c2a57b87c" gracePeriod=30 Jan 29 11:30:19 crc kubenswrapper[4766]: I0129 11:30:19.613812 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-56hgk"] Jan 29 11:30:19 crc kubenswrapper[4766]: E0129 11:30:19.614051 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="833ad5a8-865a-420a-8337-976684a1c9bd" containerName="collect-profiles" Jan 29 11:30:19 crc kubenswrapper[4766]: I0129 11:30:19.614068 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="833ad5a8-865a-420a-8337-976684a1c9bd" containerName="collect-profiles" Jan 29 11:30:19 crc kubenswrapper[4766]: I0129 11:30:19.614176 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="833ad5a8-865a-420a-8337-976684a1c9bd" containerName="collect-profiles" Jan 29 11:30:19 crc kubenswrapper[4766]: I0129 11:30:19.614670 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-56hgk" Jan 29 11:30:19 crc kubenswrapper[4766]: I0129 11:30:19.619825 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-56hgk"] Jan 29 11:30:19 crc kubenswrapper[4766]: I0129 11:30:19.678696 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf"] Jan 29 11:30:19 crc kubenswrapper[4766]: I0129 11:30:19.678906 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf" podUID="decf0d8c-7e98-464e-b3e5-fbd6a0856859" containerName="route-controller-manager" containerID="cri-o://8f27d3e7a4d5d71cedaa1305ba7ed1ad796a1e80a847502f6ddd421ec89d646d" gracePeriod=30 Jan 29 11:30:19 crc kubenswrapper[4766]: I0129 11:30:19.691539 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7456d7f74f-4pfct"] Jan 29 11:30:19 crc kubenswrapper[4766]: I0129 11:30:19.691769 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7456d7f74f-4pfct" podUID="f2b92849-383c-4876-bb0c-a0895dd534df" containerName="controller-manager" containerID="cri-o://4530900a3d56d2b13049b9f60f93cadbdd3ebb3f33c90ea6cffe6ddba2dd895b" gracePeriod=30 Jan 29 11:30:19 crc kubenswrapper[4766]: I0129 11:30:19.752572 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrz7n\" (UniqueName: \"kubernetes.io/projected/eb7df4c5-66c5-4c8e-a19e-b37a0fad40d8-kube-api-access-vrz7n\") pod \"marketplace-operator-79b997595-56hgk\" (UID: \"eb7df4c5-66c5-4c8e-a19e-b37a0fad40d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-56hgk" Jan 29 11:30:19 crc kubenswrapper[4766]: I0129 11:30:19.752636 4766 
Jan 29 11:30:19 crc kubenswrapper[4766]: I0129 11:30:19.752664 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eb7df4c5-66c5-4c8e-a19e-b37a0fad40d8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-56hgk\" (UID: \"eb7df4c5-66c5-4c8e-a19e-b37a0fad40d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-56hgk"
Jan 29 11:30:19 crc kubenswrapper[4766]: I0129 11:30:19.853830 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrz7n\" (UniqueName: \"kubernetes.io/projected/eb7df4c5-66c5-4c8e-a19e-b37a0fad40d8-kube-api-access-vrz7n\") pod \"marketplace-operator-79b997595-56hgk\" (UID: \"eb7df4c5-66c5-4c8e-a19e-b37a0fad40d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-56hgk"
Jan 29 11:30:19 crc kubenswrapper[4766]: I0129 11:30:19.853914 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/eb7df4c5-66c5-4c8e-a19e-b37a0fad40d8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-56hgk\" (UID: \"eb7df4c5-66c5-4c8e-a19e-b37a0fad40d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-56hgk"
Jan 29 11:30:19 crc kubenswrapper[4766]: I0129 11:30:19.853961 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eb7df4c5-66c5-4c8e-a19e-b37a0fad40d8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-56hgk\" (UID: \"eb7df4c5-66c5-4c8e-a19e-b37a0fad40d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-56hgk"
Jan 29 11:30:19 crc kubenswrapper[4766]: I0129 11:30:19.855971 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eb7df4c5-66c5-4c8e-a19e-b37a0fad40d8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-56hgk\" (UID: \"eb7df4c5-66c5-4c8e-a19e-b37a0fad40d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-56hgk"
Jan 29 11:30:19 crc kubenswrapper[4766]: I0129 11:30:19.860150 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/eb7df4c5-66c5-4c8e-a19e-b37a0fad40d8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-56hgk\" (UID: \"eb7df4c5-66c5-4c8e-a19e-b37a0fad40d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-56hgk"
Jan 29 11:30:19 crc kubenswrapper[4766]: I0129 11:30:19.872264 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrz7n\" (UniqueName: \"kubernetes.io/projected/eb7df4c5-66c5-4c8e-a19e-b37a0fad40d8-kube-api-access-vrz7n\") pod \"marketplace-operator-79b997595-56hgk\" (UID: \"eb7df4c5-66c5-4c8e-a19e-b37a0fad40d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-56hgk"
Jan 29 11:30:19 crc kubenswrapper[4766]: I0129 11:30:19.936790 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-56hgk"
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-56hgk" Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.340463 4766 generic.go:334] "Generic (PLEG): container finished" podID="8a615f4a-f498-4abb-be15-10f224ff84df" containerID="f13e58f3874e0f03a18028a1f078d889b05c172817f3173a7e8156921e66571a" exitCode=0 Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.340802 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plg8c" event={"ID":"8a615f4a-f498-4abb-be15-10f224ff84df","Type":"ContainerDied","Data":"f13e58f3874e0f03a18028a1f078d889b05c172817f3173a7e8156921e66571a"} Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.342544 4766 generic.go:334] "Generic (PLEG): container finished" podID="8a36521a-d4cf-4c8e-8dbe-61599b472068" containerID="ebd6058e4f4c04ae01f565703745d5c00713a10ea2c182e01278af2c2a57b87c" exitCode=0 Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.342596 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gnkm" event={"ID":"8a36521a-d4cf-4c8e-8dbe-61599b472068","Type":"ContainerDied","Data":"ebd6058e4f4c04ae01f565703745d5c00713a10ea2c182e01278af2c2a57b87c"} Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.344419 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ztc7c_72cf9723-cba4-4f3b-90c4-c8b919e9b7a8/marketplace-operator/1.log" Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.344448 4766 generic.go:334] "Generic (PLEG): container finished" podID="72cf9723-cba4-4f3b-90c4-c8b919e9b7a8" containerID="11b096c9f2105a2d593c3bc6034399a160aeb36772d70712f82e2a14692dc61a" exitCode=0 Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.344479 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" event={"ID":"72cf9723-cba4-4f3b-90c4-c8b919e9b7a8","Type":"ContainerDied","Data":"11b096c9f2105a2d593c3bc6034399a160aeb36772d70712f82e2a14692dc61a"} Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.344502 4766 scope.go:117] "RemoveContainer" containerID="46f4a914955b4dfe3a60ec8a9123964661868d9be400d92a50e1ac527cf7e93c" Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.347108 4766 generic.go:334] "Generic (PLEG): container finished" podID="f2b92849-383c-4876-bb0c-a0895dd534df" containerID="4530900a3d56d2b13049b9f60f93cadbdd3ebb3f33c90ea6cffe6ddba2dd895b" exitCode=0 Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.347161 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7456d7f74f-4pfct" event={"ID":"f2b92849-383c-4876-bb0c-a0895dd534df","Type":"ContainerDied","Data":"4530900a3d56d2b13049b9f60f93cadbdd3ebb3f33c90ea6cffe6ddba2dd895b"} Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.353500 4766 generic.go:334] "Generic (PLEG): container finished" podID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" containerID="9128a9bb705d8143f3e3b108dd9b69778f90d66fccfea3699ac54c69b6a3bd5c" exitCode=0 Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.353552 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9bpkx" event={"ID":"d4adf06b-9f3e-42f1-b70f-31ec39923b11","Type":"ContainerDied","Data":"9128a9bb705d8143f3e3b108dd9b69778f90d66fccfea3699ac54c69b6a3bd5c"} Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.355752 4766 generic.go:334] "Generic (PLEG): container finished" 
podID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" containerID="c49a5657f4047d3b4ebc585eeb00c9ca7a83e764b486c9e6912d17a4a490c00a" exitCode=0 Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.355813 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tx9nf" event={"ID":"43d854e2-61c5-46d0-a85f-575c5fc51fa4","Type":"ContainerDied","Data":"c49a5657f4047d3b4ebc585eeb00c9ca7a83e764b486c9e6912d17a4a490c00a"} Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.357801 4766 generic.go:334] "Generic (PLEG): container finished" podID="decf0d8c-7e98-464e-b3e5-fbd6a0856859" containerID="8f27d3e7a4d5d71cedaa1305ba7ed1ad796a1e80a847502f6ddd421ec89d646d" exitCode=0 Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.357872 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf" event={"ID":"decf0d8c-7e98-464e-b3e5-fbd6a0856859","Type":"ContainerDied","Data":"8f27d3e7a4d5d71cedaa1305ba7ed1ad796a1e80a847502f6ddd421ec89d646d"} Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.386426 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-56hgk"] Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.567988 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9bpkx" Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.766438 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sngcf\" (UniqueName: \"kubernetes.io/projected/d4adf06b-9f3e-42f1-b70f-31ec39923b11-kube-api-access-sngcf\") pod \"d4adf06b-9f3e-42f1-b70f-31ec39923b11\" (UID: \"d4adf06b-9f3e-42f1-b70f-31ec39923b11\") " Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.766546 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4adf06b-9f3e-42f1-b70f-31ec39923b11-utilities\") pod \"d4adf06b-9f3e-42f1-b70f-31ec39923b11\" (UID: \"d4adf06b-9f3e-42f1-b70f-31ec39923b11\") " Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.766573 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4adf06b-9f3e-42f1-b70f-31ec39923b11-catalog-content\") pod \"d4adf06b-9f3e-42f1-b70f-31ec39923b11\" (UID: \"d4adf06b-9f3e-42f1-b70f-31ec39923b11\") " Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.769255 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4adf06b-9f3e-42f1-b70f-31ec39923b11-utilities" (OuterVolumeSpecName: "utilities") pod "d4adf06b-9f3e-42f1-b70f-31ec39923b11" (UID: "d4adf06b-9f3e-42f1-b70f-31ec39923b11"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.776590 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4adf06b-9f3e-42f1-b70f-31ec39923b11-kube-api-access-sngcf" (OuterVolumeSpecName: "kube-api-access-sngcf") pod "d4adf06b-9f3e-42f1-b70f-31ec39923b11" (UID: "d4adf06b-9f3e-42f1-b70f-31ec39923b11"). InnerVolumeSpecName "kube-api-access-sngcf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.868037 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sngcf\" (UniqueName: \"kubernetes.io/projected/d4adf06b-9f3e-42f1-b70f-31ec39923b11-kube-api-access-sngcf\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.868083 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4adf06b-9f3e-42f1-b70f-31ec39923b11-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.881092 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4adf06b-9f3e-42f1-b70f-31ec39923b11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4adf06b-9f3e-42f1-b70f-31ec39923b11" (UID: "d4adf06b-9f3e-42f1-b70f-31ec39923b11"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.931610 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tx9nf" Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.969137 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4adf06b-9f3e-42f1-b70f-31ec39923b11-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.989200 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-plg8c" Jan 29 11:30:20 crc kubenswrapper[4766]: I0129 11:30:20.995049 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.070031 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43d854e2-61c5-46d0-a85f-575c5fc51fa4-catalog-content\") pod \"43d854e2-61c5-46d0-a85f-575c5fc51fa4\" (UID: \"43d854e2-61c5-46d0-a85f-575c5fc51fa4\") " Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.070141 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8k7zl\" (UniqueName: \"kubernetes.io/projected/43d854e2-61c5-46d0-a85f-575c5fc51fa4-kube-api-access-8k7zl\") pod \"43d854e2-61c5-46d0-a85f-575c5fc51fa4\" (UID: \"43d854e2-61c5-46d0-a85f-575c5fc51fa4\") " Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.070185 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43d854e2-61c5-46d0-a85f-575c5fc51fa4-utilities\") pod \"43d854e2-61c5-46d0-a85f-575c5fc51fa4\" (UID: \"43d854e2-61c5-46d0-a85f-575c5fc51fa4\") " Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.071308 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43d854e2-61c5-46d0-a85f-575c5fc51fa4-utilities" (OuterVolumeSpecName: "utilities") pod "43d854e2-61c5-46d0-a85f-575c5fc51fa4" (UID: "43d854e2-61c5-46d0-a85f-575c5fc51fa4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.107641 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43d854e2-61c5-46d0-a85f-575c5fc51fa4-kube-api-access-8k7zl" (OuterVolumeSpecName: "kube-api-access-8k7zl") pod "43d854e2-61c5-46d0-a85f-575c5fc51fa4" (UID: "43d854e2-61c5-46d0-a85f-575c5fc51fa4"). InnerVolumeSpecName "kube-api-access-8k7zl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.117743 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43d854e2-61c5-46d0-a85f-575c5fc51fa4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "43d854e2-61c5-46d0-a85f-575c5fc51fa4" (UID: "43d854e2-61c5-46d0-a85f-575c5fc51fa4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.170991 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs67p\" (UniqueName: \"kubernetes.io/projected/72cf9723-cba4-4f3b-90c4-c8b919e9b7a8-kube-api-access-vs67p\") pod \"72cf9723-cba4-4f3b-90c4-c8b919e9b7a8\" (UID: \"72cf9723-cba4-4f3b-90c4-c8b919e9b7a8\") " Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.174955 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a615f4a-f498-4abb-be15-10f224ff84df-utilities\") pod \"8a615f4a-f498-4abb-be15-10f224ff84df\" (UID: \"8a615f4a-f498-4abb-be15-10f224ff84df\") " Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.175001 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/72cf9723-cba4-4f3b-90c4-c8b919e9b7a8-marketplace-operator-metrics\") pod \"72cf9723-cba4-4f3b-90c4-c8b919e9b7a8\" (UID: \"72cf9723-cba4-4f3b-90c4-c8b919e9b7a8\") " Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.175118 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a615f4a-f498-4abb-be15-10f224ff84df-catalog-content\") pod \"8a615f4a-f498-4abb-be15-10f224ff84df\" (UID: \"8a615f4a-f498-4abb-be15-10f224ff84df\") " Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.175171 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72cf9723-cba4-4f3b-90c4-c8b919e9b7a8-marketplace-trusted-ca\") pod \"72cf9723-cba4-4f3b-90c4-c8b919e9b7a8\" (UID: \"72cf9723-cba4-4f3b-90c4-c8b919e9b7a8\") " Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.175197 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8fvz\" (UniqueName: \"kubernetes.io/projected/8a615f4a-f498-4abb-be15-10f224ff84df-kube-api-access-k8fvz\") pod \"8a615f4a-f498-4abb-be15-10f224ff84df\" (UID: \"8a615f4a-f498-4abb-be15-10f224ff84df\") " Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.175619 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43d854e2-61c5-46d0-a85f-575c5fc51fa4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.175647 4766 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-8k7zl\" (UniqueName: \"kubernetes.io/projected/43d854e2-61c5-46d0-a85f-575c5fc51fa4-kube-api-access-8k7zl\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.175662 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43d854e2-61c5-46d0-a85f-575c5fc51fa4-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.176209 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a615f4a-f498-4abb-be15-10f224ff84df-utilities" (OuterVolumeSpecName: "utilities") pod "8a615f4a-f498-4abb-be15-10f224ff84df" (UID: "8a615f4a-f498-4abb-be15-10f224ff84df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.176647 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72cf9723-cba4-4f3b-90c4-c8b919e9b7a8-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "72cf9723-cba4-4f3b-90c4-c8b919e9b7a8" (UID: "72cf9723-cba4-4f3b-90c4-c8b919e9b7a8"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.179830 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72cf9723-cba4-4f3b-90c4-c8b919e9b7a8-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "72cf9723-cba4-4f3b-90c4-c8b919e9b7a8" (UID: "72cf9723-cba4-4f3b-90c4-c8b919e9b7a8"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.179968 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72cf9723-cba4-4f3b-90c4-c8b919e9b7a8-kube-api-access-vs67p" (OuterVolumeSpecName: "kube-api-access-vs67p") pod "72cf9723-cba4-4f3b-90c4-c8b919e9b7a8" (UID: "72cf9723-cba4-4f3b-90c4-c8b919e9b7a8"). InnerVolumeSpecName "kube-api-access-vs67p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.185058 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a615f4a-f498-4abb-be15-10f224ff84df-kube-api-access-k8fvz" (OuterVolumeSpecName: "kube-api-access-k8fvz") pod "8a615f4a-f498-4abb-be15-10f224ff84df" (UID: "8a615f4a-f498-4abb-be15-10f224ff84df"). InnerVolumeSpecName "kube-api-access-k8fvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.222981 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a615f4a-f498-4abb-be15-10f224ff84df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a615f4a-f498-4abb-be15-10f224ff84df" (UID: "8a615f4a-f498-4abb-be15-10f224ff84df"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.270505 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-r9vtz"] Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.276345 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vs67p\" (UniqueName: \"kubernetes.io/projected/72cf9723-cba4-4f3b-90c4-c8b919e9b7a8-kube-api-access-vs67p\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.276374 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a615f4a-f498-4abb-be15-10f224ff84df-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.276385 4766 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/72cf9723-cba4-4f3b-90c4-c8b919e9b7a8-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.276393 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a615f4a-f498-4abb-be15-10f224ff84df-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.276401 4766 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72cf9723-cba4-4f3b-90c4-c8b919e9b7a8-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.276438 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8fvz\" (UniqueName: \"kubernetes.io/projected/8a615f4a-f498-4abb-be15-10f224ff84df-kube-api-access-k8fvz\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.371102 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tx9nf" event={"ID":"43d854e2-61c5-46d0-a85f-575c5fc51fa4","Type":"ContainerDied","Data":"5654ee4b659fbb76e08f89badb45e822f714a5a12154687843272495e574036b"} Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.371152 4766 scope.go:117] "RemoveContainer" containerID="c49a5657f4047d3b4ebc585eeb00c9ca7a83e764b486c9e6912d17a4a490c00a" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.371242 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tx9nf" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.379750 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plg8c" event={"ID":"8a615f4a-f498-4abb-be15-10f224ff84df","Type":"ContainerDied","Data":"52771bf84dddca314e9a078755ec0bf804526b6d94e76edfdd320b028d2fe2a5"} Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.379886 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-plg8c" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.399320 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" event={"ID":"72cf9723-cba4-4f3b-90c4-c8b919e9b7a8","Type":"ContainerDied","Data":"41d6d9c1c1c95cdb4096f171331d6a959b622d98890a28f387570ac099e40b89"} Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.399461 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ztc7c" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.406624 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-56hgk" event={"ID":"eb7df4c5-66c5-4c8e-a19e-b37a0fad40d8","Type":"ContainerStarted","Data":"83b9a2e7e40cd81c391855788b8572a48ffa1c8fc30bdb5aedf7103d006cf9e0"} Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.406662 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-56hgk" event={"ID":"eb7df4c5-66c5-4c8e-a19e-b37a0fad40d8","Type":"ContainerStarted","Data":"616e94837d5b0f6998d28e8e1c1c9ad7ecf6198734272887f9636d32b3456a28"} Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.407477 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-56hgk" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.408508 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tx9nf"] Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.422455 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9bpkx" event={"ID":"d4adf06b-9f3e-42f1-b70f-31ec39923b11","Type":"ContainerDied","Data":"8a479e2d161106294cc3ea7147073f0d6f7bbe474c7abbb090daa657810089ff"} Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.422540 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9bpkx" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.423720 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-56hgk" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.424010 4766 scope.go:117] "RemoveContainer" containerID="038f1419e5983fb3b980bd0ccfa90f74b513f612ba1990f8f629f58637ee9b7d" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.429839 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.442493 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tx9nf"] Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.459344 4766 scope.go:117] "RemoveContainer" containerID="08f9491f35bea61381087a8d134ae369a9246a6aa4a3f8747455304a4df9011d" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.461385 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-plg8c"] Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.474490 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-plg8c"] Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.476406 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-56hgk" podStartSLOduration=2.476386686 podStartE2EDuration="2.476386686s" podCreationTimestamp="2026-01-29 11:30:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:30:21.453648161 +0000 UTC m=+558.566041172" watchObservedRunningTime="2026-01-29 11:30:21.476386686 +0000 UTC m=+558.588779697" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.477397 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/decf0d8c-7e98-464e-b3e5-fbd6a0856859-client-ca\") pod \"decf0d8c-7e98-464e-b3e5-fbd6a0856859\" (UID: \"decf0d8c-7e98-464e-b3e5-fbd6a0856859\") " Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.477445 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2g2f2\" (UniqueName: \"kubernetes.io/projected/decf0d8c-7e98-464e-b3e5-fbd6a0856859-kube-api-access-2g2f2\") pod \"decf0d8c-7e98-464e-b3e5-fbd6a0856859\" (UID: \"decf0d8c-7e98-464e-b3e5-fbd6a0856859\") " Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.477484 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/decf0d8c-7e98-464e-b3e5-fbd6a0856859-config\") pod \"decf0d8c-7e98-464e-b3e5-fbd6a0856859\" (UID: \"decf0d8c-7e98-464e-b3e5-fbd6a0856859\") " Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.477519 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/decf0d8c-7e98-464e-b3e5-fbd6a0856859-serving-cert\") pod \"decf0d8c-7e98-464e-b3e5-fbd6a0856859\" (UID: \"decf0d8c-7e98-464e-b3e5-fbd6a0856859\") " Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.478842 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/decf0d8c-7e98-464e-b3e5-fbd6a0856859-client-ca" (OuterVolumeSpecName: "client-ca") pod "decf0d8c-7e98-464e-b3e5-fbd6a0856859" (UID: "decf0d8c-7e98-464e-b3e5-fbd6a0856859"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.479605 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/decf0d8c-7e98-464e-b3e5-fbd6a0856859-config" (OuterVolumeSpecName: "config") pod "decf0d8c-7e98-464e-b3e5-fbd6a0856859" (UID: "decf0d8c-7e98-464e-b3e5-fbd6a0856859"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.486770 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/decf0d8c-7e98-464e-b3e5-fbd6a0856859-kube-api-access-2g2f2" (OuterVolumeSpecName: "kube-api-access-2g2f2") pod "decf0d8c-7e98-464e-b3e5-fbd6a0856859" (UID: "decf0d8c-7e98-464e-b3e5-fbd6a0856859"). InnerVolumeSpecName "kube-api-access-2g2f2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.487478 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/decf0d8c-7e98-464e-b3e5-fbd6a0856859-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "decf0d8c-7e98-464e-b3e5-fbd6a0856859" (UID: "decf0d8c-7e98-464e-b3e5-fbd6a0856859"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.494578 4766 scope.go:117] "RemoveContainer" containerID="f13e58f3874e0f03a18028a1f078d889b05c172817f3173a7e8156921e66571a" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.498149 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9bpkx"] Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.526828 4766 scope.go:117] "RemoveContainer" containerID="10ce103542d68cdff3ae408e7daf504046172cf50410cd7d3b206abb459276ea" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.527134 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7456d7f74f-4pfct" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.528481 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9bpkx"] Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.545930 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8gnkm" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.547970 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ztc7c"] Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.557223 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ztc7c"] Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.581622 4766 scope.go:117] "RemoveContainer" containerID="fcd41e02de378fb0deba2f12849c26b4203b13d95e12945cf0fdb8b47d5a7e0c" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.581657 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2b92849-383c-4876-bb0c-a0895dd534df-proxy-ca-bundles\") pod \"f2b92849-383c-4876-bb0c-a0895dd534df\" (UID: \"f2b92849-383c-4876-bb0c-a0895dd534df\") " Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.581978 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/decf0d8c-7e98-464e-b3e5-fbd6a0856859-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.582011 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2g2f2\" (UniqueName: \"kubernetes.io/projected/decf0d8c-7e98-464e-b3e5-fbd6a0856859-kube-api-access-2g2f2\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.582037 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/decf0d8c-7e98-464e-b3e5-fbd6a0856859-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.582072 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/decf0d8c-7e98-464e-b3e5-fbd6a0856859-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.585661 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2b92849-383c-4876-bb0c-a0895dd534df-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f2b92849-383c-4876-bb0c-a0895dd534df" (UID: "f2b92849-383c-4876-bb0c-a0895dd534df"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.619973 4766 scope.go:117] "RemoveContainer" containerID="11b096c9f2105a2d593c3bc6034399a160aeb36772d70712f82e2a14692dc61a" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.647296 4766 scope.go:117] "RemoveContainer" containerID="9128a9bb705d8143f3e3b108dd9b69778f90d66fccfea3699ac54c69b6a3bd5c" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.682807 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a36521a-d4cf-4c8e-8dbe-61599b472068-catalog-content\") pod \"8a36521a-d4cf-4c8e-8dbe-61599b472068\" (UID: \"8a36521a-d4cf-4c8e-8dbe-61599b472068\") " Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.682902 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b92849-383c-4876-bb0c-a0895dd534df-config\") pod \"f2b92849-383c-4876-bb0c-a0895dd534df\" (UID: \"f2b92849-383c-4876-bb0c-a0895dd534df\") " Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.682942 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj4c9\" (UniqueName: \"kubernetes.io/projected/f2b92849-383c-4876-bb0c-a0895dd534df-kube-api-access-rj4c9\") pod \"f2b92849-383c-4876-bb0c-a0895dd534df\" (UID: \"f2b92849-383c-4876-bb0c-a0895dd534df\") " Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.682984 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2b92849-383c-4876-bb0c-a0895dd534df-client-ca\") pod \"f2b92849-383c-4876-bb0c-a0895dd534df\" (UID: \"f2b92849-383c-4876-bb0c-a0895dd534df\") " Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.683041 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a36521a-d4cf-4c8e-8dbe-61599b472068-utilities\") pod \"8a36521a-d4cf-4c8e-8dbe-61599b472068\" (UID: \"8a36521a-d4cf-4c8e-8dbe-61599b472068\") " Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.683084 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2sg8\" (UniqueName: \"kubernetes.io/projected/8a36521a-d4cf-4c8e-8dbe-61599b472068-kube-api-access-x2sg8\") pod \"8a36521a-d4cf-4c8e-8dbe-61599b472068\" (UID: \"8a36521a-d4cf-4c8e-8dbe-61599b472068\") " Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.683120 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2b92849-383c-4876-bb0c-a0895dd534df-serving-cert\") pod \"f2b92849-383c-4876-bb0c-a0895dd534df\" (UID: \"f2b92849-383c-4876-bb0c-a0895dd534df\") " Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.684271 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2b92849-383c-4876-bb0c-a0895dd534df-client-ca" (OuterVolumeSpecName: "client-ca") pod "f2b92849-383c-4876-bb0c-a0895dd534df" (UID: "f2b92849-383c-4876-bb0c-a0895dd534df"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.684305 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2b92849-383c-4876-bb0c-a0895dd534df-config" (OuterVolumeSpecName: "config") pod "f2b92849-383c-4876-bb0c-a0895dd534df" (UID: "f2b92849-383c-4876-bb0c-a0895dd534df"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.685097 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a36521a-d4cf-4c8e-8dbe-61599b472068-utilities" (OuterVolumeSpecName: "utilities") pod "8a36521a-d4cf-4c8e-8dbe-61599b472068" (UID: "8a36521a-d4cf-4c8e-8dbe-61599b472068"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.689026 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a36521a-d4cf-4c8e-8dbe-61599b472068-kube-api-access-x2sg8" (OuterVolumeSpecName: "kube-api-access-x2sg8") pod "8a36521a-d4cf-4c8e-8dbe-61599b472068" (UID: "8a36521a-d4cf-4c8e-8dbe-61599b472068"). InnerVolumeSpecName "kube-api-access-x2sg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.689530 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b92849-383c-4876-bb0c-a0895dd534df-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f2b92849-383c-4876-bb0c-a0895dd534df" (UID: "f2b92849-383c-4876-bb0c-a0895dd534df"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.690869 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2b92849-383c-4876-bb0c-a0895dd534df-kube-api-access-rj4c9" (OuterVolumeSpecName: "kube-api-access-rj4c9") pod "f2b92849-383c-4876-bb0c-a0895dd534df" (UID: "f2b92849-383c-4876-bb0c-a0895dd534df"). InnerVolumeSpecName "kube-api-access-rj4c9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.691821 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a36521a-d4cf-4c8e-8dbe-61599b472068-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.691851 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2sg8\" (UniqueName: \"kubernetes.io/projected/8a36521a-d4cf-4c8e-8dbe-61599b472068-kube-api-access-x2sg8\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.691868 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2b92849-383c-4876-bb0c-a0895dd534df-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.691881 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2b92849-383c-4876-bb0c-a0895dd534df-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.691893 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b92849-383c-4876-bb0c-a0895dd534df-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.691904 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj4c9\" (UniqueName: \"kubernetes.io/projected/f2b92849-383c-4876-bb0c-a0895dd534df-kube-api-access-rj4c9\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.691916 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2b92849-383c-4876-bb0c-a0895dd534df-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.729658 4766 scope.go:117] "RemoveContainer" containerID="3b45b97ee064487185914290de86cdeb80cde56edce4d24d25e4ec123d5c4723" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.751754 4766 scope.go:117] "RemoveContainer" containerID="ac107e2fb5b881912697082fa61f68bcf9262d11269b42b31eb876a18ec2b5e0" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.829090 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a36521a-d4cf-4c8e-8dbe-61599b472068-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a36521a-d4cf-4c8e-8dbe-61599b472068" (UID: "8a36521a-d4cf-4c8e-8dbe-61599b472068"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:30:21 crc kubenswrapper[4766]: I0129 11:30:21.894242 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a36521a-d4cf-4c8e-8dbe-61599b472068-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.135384 4766 patch_prober.go:28] interesting pod/route-controller-manager-f9f5c8867-zc9wf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.135481 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf" podUID="decf0d8c-7e98-464e-b3e5-fbd6a0856859" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.156143 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c6zxp"] Jan 29 11:30:22 crc kubenswrapper[4766]: E0129 11:30:22.156403 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72cf9723-cba4-4f3b-90c4-c8b919e9b7a8" containerName="marketplace-operator" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.156454 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="72cf9723-cba4-4f3b-90c4-c8b919e9b7a8" containerName="marketplace-operator" Jan 29 11:30:22 crc kubenswrapper[4766]: E0129 11:30:22.156468 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" containerName="extract-utilities" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.156476 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" containerName="extract-utilities" Jan 29 11:30:22 crc kubenswrapper[4766]: E0129 11:30:22.156488 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" containerName="extract-content" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.156495 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" containerName="extract-content" Jan 29 11:30:22 crc kubenswrapper[4766]: E0129 11:30:22.156503 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" containerName="extract-utilities" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.156513 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" containerName="extract-utilities" Jan 29 11:30:22 crc kubenswrapper[4766]: E0129 11:30:22.156526 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" containerName="extract-content" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.156534 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" containerName="extract-content" Jan 29 11:30:22 crc kubenswrapper[4766]: E0129 11:30:22.156546 4766 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="decf0d8c-7e98-464e-b3e5-fbd6a0856859" containerName="route-controller-manager" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.156554 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="decf0d8c-7e98-464e-b3e5-fbd6a0856859" containerName="route-controller-manager" Jan 29 11:30:22 crc kubenswrapper[4766]: E0129 11:30:22.156564 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b92849-383c-4876-bb0c-a0895dd534df" containerName="controller-manager" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.156572 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b92849-383c-4876-bb0c-a0895dd534df" containerName="controller-manager" Jan 29 11:30:22 crc kubenswrapper[4766]: E0129 11:30:22.156584 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a36521a-d4cf-4c8e-8dbe-61599b472068" containerName="registry-server" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.156592 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a36521a-d4cf-4c8e-8dbe-61599b472068" containerName="registry-server" Jan 29 11:30:22 crc kubenswrapper[4766]: E0129 11:30:22.156600 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" containerName="registry-server" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.156608 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" containerName="registry-server" Jan 29 11:30:22 crc kubenswrapper[4766]: E0129 11:30:22.156619 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" containerName="registry-server" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.156626 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" containerName="registry-server" Jan 29 11:30:22 crc kubenswrapper[4766]: E0129 11:30:22.156636 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" containerName="extract-content" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.156643 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" containerName="extract-content" Jan 29 11:30:22 crc kubenswrapper[4766]: E0129 11:30:22.156653 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72cf9723-cba4-4f3b-90c4-c8b919e9b7a8" containerName="marketplace-operator" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.156661 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="72cf9723-cba4-4f3b-90c4-c8b919e9b7a8" containerName="marketplace-operator" Jan 29 11:30:22 crc kubenswrapper[4766]: E0129 11:30:22.156672 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" containerName="extract-utilities" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.156679 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" containerName="extract-utilities" Jan 29 11:30:22 crc kubenswrapper[4766]: E0129 11:30:22.156686 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" containerName="registry-server" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.156693 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" containerName="registry-server" Jan 29 11:30:22 crc kubenswrapper[4766]: E0129 11:30:22.156703 4766 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a36521a-d4cf-4c8e-8dbe-61599b472068" containerName="extract-content" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.156711 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a36521a-d4cf-4c8e-8dbe-61599b472068" containerName="extract-content" Jan 29 11:30:22 crc kubenswrapper[4766]: E0129 11:30:22.156720 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a36521a-d4cf-4c8e-8dbe-61599b472068" containerName="extract-utilities" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.156727 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a36521a-d4cf-4c8e-8dbe-61599b472068" containerName="extract-utilities" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.156831 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2b92849-383c-4876-bb0c-a0895dd534df" containerName="controller-manager" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.156847 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="72cf9723-cba4-4f3b-90c4-c8b919e9b7a8" containerName="marketplace-operator" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.156857 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" containerName="registry-server" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.156866 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a36521a-d4cf-4c8e-8dbe-61599b472068" containerName="registry-server" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.156879 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" containerName="registry-server" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.156889 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="72cf9723-cba4-4f3b-90c4-c8b919e9b7a8" containerName="marketplace-operator" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.156897 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="decf0d8c-7e98-464e-b3e5-fbd6a0856859" containerName="route-controller-manager" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.156909 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" containerName="registry-server" Jan 29 11:30:22 crc kubenswrapper[4766]: E0129 11:30:22.157002 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72cf9723-cba4-4f3b-90c4-c8b919e9b7a8" containerName="marketplace-operator" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.157012 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="72cf9723-cba4-4f3b-90c4-c8b919e9b7a8" containerName="marketplace-operator" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.157120 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="72cf9723-cba4-4f3b-90c4-c8b919e9b7a8" containerName="marketplace-operator" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.157671 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c6zxp" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.160486 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.169603 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c6zxp"] Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.197046 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/498c7200-d206-4ace-8627-99ae72a379ce-catalog-content\") pod \"certified-operators-c6zxp\" (UID: \"498c7200-d206-4ace-8627-99ae72a379ce\") " pod="openshift-marketplace/certified-operators-c6zxp" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.197105 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxxdj\" (UniqueName: \"kubernetes.io/projected/498c7200-d206-4ace-8627-99ae72a379ce-kube-api-access-xxxdj\") pod \"certified-operators-c6zxp\" (UID: \"498c7200-d206-4ace-8627-99ae72a379ce\") " pod="openshift-marketplace/certified-operators-c6zxp" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.197138 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/498c7200-d206-4ace-8627-99ae72a379ce-utilities\") pod \"certified-operators-c6zxp\" (UID: \"498c7200-d206-4ace-8627-99ae72a379ce\") " pod="openshift-marketplace/certified-operators-c6zxp" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.298513 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/498c7200-d206-4ace-8627-99ae72a379ce-utilities\") pod \"certified-operators-c6zxp\" (UID: \"498c7200-d206-4ace-8627-99ae72a379ce\") " pod="openshift-marketplace/certified-operators-c6zxp" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.298598 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/498c7200-d206-4ace-8627-99ae72a379ce-catalog-content\") pod \"certified-operators-c6zxp\" (UID: \"498c7200-d206-4ace-8627-99ae72a379ce\") " pod="openshift-marketplace/certified-operators-c6zxp" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.298697 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxxdj\" (UniqueName: \"kubernetes.io/projected/498c7200-d206-4ace-8627-99ae72a379ce-kube-api-access-xxxdj\") pod \"certified-operators-c6zxp\" (UID: \"498c7200-d206-4ace-8627-99ae72a379ce\") " pod="openshift-marketplace/certified-operators-c6zxp" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.299504 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/498c7200-d206-4ace-8627-99ae72a379ce-catalog-content\") pod \"certified-operators-c6zxp\" (UID: \"498c7200-d206-4ace-8627-99ae72a379ce\") " pod="openshift-marketplace/certified-operators-c6zxp" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.299644 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/498c7200-d206-4ace-8627-99ae72a379ce-utilities\") pod \"certified-operators-c6zxp\" (UID: 
\"498c7200-d206-4ace-8627-99ae72a379ce\") " pod="openshift-marketplace/certified-operators-c6zxp" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.317279 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxxdj\" (UniqueName: \"kubernetes.io/projected/498c7200-d206-4ace-8627-99ae72a379ce-kube-api-access-xxxdj\") pod \"certified-operators-c6zxp\" (UID: \"498c7200-d206-4ace-8627-99ae72a379ce\") " pod="openshift-marketplace/certified-operators-c6zxp" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.428913 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7456d7f74f-4pfct" event={"ID":"f2b92849-383c-4876-bb0c-a0895dd534df","Type":"ContainerDied","Data":"2cf98ec6081f6f49e586fce03eddb55529dcd0cdfa35176e463bbaa188e82a11"} Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.429001 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7456d7f74f-4pfct" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.429033 4766 scope.go:117] "RemoveContainer" containerID="4530900a3d56d2b13049b9f60f93cadbdd3ebb3f33c90ea6cffe6ddba2dd895b" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.433941 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf" event={"ID":"decf0d8c-7e98-464e-b3e5-fbd6a0856859","Type":"ContainerDied","Data":"cfff9066b51c84c6f26b5c947816c031943388129dc52b4a82a653996ea638f8"} Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.433989 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.439157 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gnkm" event={"ID":"8a36521a-d4cf-4c8e-8dbe-61599b472068","Type":"ContainerDied","Data":"c3678c7dde3b21a4082f3c5916dcaa0338b5a338bb2a36d1bc754bdd618a7bbf"} Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.439293 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8gnkm" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.443643 4766 scope.go:117] "RemoveContainer" containerID="8f27d3e7a4d5d71cedaa1305ba7ed1ad796a1e80a847502f6ddd421ec89d646d" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.458740 4766 scope.go:117] "RemoveContainer" containerID="ebd6058e4f4c04ae01f565703745d5c00713a10ea2c182e01278af2c2a57b87c" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.494002 4766 scope.go:117] "RemoveContainer" containerID="7c6847a659cf8ddc25326f6f6250201535668cecbf34731d409726760a0c7c65" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.494596 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf"] Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.501111 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f9f5c8867-zc9wf"] Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.510515 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7456d7f74f-4pfct"] Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.510600 4766 scope.go:117] "RemoveContainer" containerID="b756eb040b45cb3adb12677d2ba3e909cc54ab18c5026320c95bd50d8829b045" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.513298 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7456d7f74f-4pfct"] Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.517305 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8gnkm"] Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.521253 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8gnkm"] Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.529809 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c6zxp" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.851663 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-588df8dfb8-tpplz"] Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.852832 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-588df8dfb8-tpplz" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.855276 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.856285 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.857189 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.857245 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.857557 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.857664 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.859845 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff5f95476-v7gdk"] Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.861768 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ff5f95476-v7gdk" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.865062 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.866329 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.866567 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.866709 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.866881 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.867011 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.867175 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.870638 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-588df8dfb8-tpplz"] Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.874559 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff5f95476-v7gdk"] Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.906660 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a9a7f03c-3958-4c03-9d33-9ed4978d88dd-config\") pod \"controller-manager-588df8dfb8-tpplz\" (UID: \"a9a7f03c-3958-4c03-9d33-9ed4978d88dd\") " pod="openshift-controller-manager/controller-manager-588df8dfb8-tpplz" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.906717 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9a7f03c-3958-4c03-9d33-9ed4978d88dd-client-ca\") pod \"controller-manager-588df8dfb8-tpplz\" (UID: \"a9a7f03c-3958-4c03-9d33-9ed4978d88dd\") " pod="openshift-controller-manager/controller-manager-588df8dfb8-tpplz" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.906747 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42shh\" (UniqueName: \"kubernetes.io/projected/e218e976-ee08-4256-967a-4312beef6d6e-kube-api-access-42shh\") pod \"route-controller-manager-5ff5f95476-v7gdk\" (UID: \"e218e976-ee08-4256-967a-4312beef6d6e\") " pod="openshift-route-controller-manager/route-controller-manager-5ff5f95476-v7gdk" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.906826 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9a7f03c-3958-4c03-9d33-9ed4978d88dd-proxy-ca-bundles\") pod \"controller-manager-588df8dfb8-tpplz\" (UID: \"a9a7f03c-3958-4c03-9d33-9ed4978d88dd\") " pod="openshift-controller-manager/controller-manager-588df8dfb8-tpplz" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.906848 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e218e976-ee08-4256-967a-4312beef6d6e-client-ca\") pod \"route-controller-manager-5ff5f95476-v7gdk\" (UID: \"e218e976-ee08-4256-967a-4312beef6d6e\") " pod="openshift-route-controller-manager/route-controller-manager-5ff5f95476-v7gdk" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.906871 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e218e976-ee08-4256-967a-4312beef6d6e-serving-cert\") pod \"route-controller-manager-5ff5f95476-v7gdk\" (UID: \"e218e976-ee08-4256-967a-4312beef6d6e\") " pod="openshift-route-controller-manager/route-controller-manager-5ff5f95476-v7gdk" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.906893 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e218e976-ee08-4256-967a-4312beef6d6e-config\") pod \"route-controller-manager-5ff5f95476-v7gdk\" (UID: \"e218e976-ee08-4256-967a-4312beef6d6e\") " pod="openshift-route-controller-manager/route-controller-manager-5ff5f95476-v7gdk" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.906921 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp4vh\" (UniqueName: \"kubernetes.io/projected/a9a7f03c-3958-4c03-9d33-9ed4978d88dd-kube-api-access-vp4vh\") pod \"controller-manager-588df8dfb8-tpplz\" (UID: \"a9a7f03c-3958-4c03-9d33-9ed4978d88dd\") " pod="openshift-controller-manager/controller-manager-588df8dfb8-tpplz" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.906942 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9a7f03c-3958-4c03-9d33-9ed4978d88dd-serving-cert\") pod \"controller-manager-588df8dfb8-tpplz\" (UID: \"a9a7f03c-3958-4c03-9d33-9ed4978d88dd\") " pod="openshift-controller-manager/controller-manager-588df8dfb8-tpplz" Jan 29 11:30:22 crc kubenswrapper[4766]: I0129 11:30:22.939408 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c6zxp"] Jan 29 11:30:23 crc kubenswrapper[4766]: I0129 11:30:23.008099 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a7f03c-3958-4c03-9d33-9ed4978d88dd-config\") pod \"controller-manager-588df8dfb8-tpplz\" (UID: \"a9a7f03c-3958-4c03-9d33-9ed4978d88dd\") " pod="openshift-controller-manager/controller-manager-588df8dfb8-tpplz" Jan 29 11:30:23 crc kubenswrapper[4766]: I0129 11:30:23.008261 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9a7f03c-3958-4c03-9d33-9ed4978d88dd-client-ca\") pod \"controller-manager-588df8dfb8-tpplz\" (UID: \"a9a7f03c-3958-4c03-9d33-9ed4978d88dd\") " pod="openshift-controller-manager/controller-manager-588df8dfb8-tpplz" Jan 29 11:30:23 crc kubenswrapper[4766]: I0129 11:30:23.008297 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42shh\" (UniqueName: \"kubernetes.io/projected/e218e976-ee08-4256-967a-4312beef6d6e-kube-api-access-42shh\") pod \"route-controller-manager-5ff5f95476-v7gdk\" (UID: \"e218e976-ee08-4256-967a-4312beef6d6e\") " pod="openshift-route-controller-manager/route-controller-manager-5ff5f95476-v7gdk" Jan 29 11:30:23 crc kubenswrapper[4766]: I0129 11:30:23.008347 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9a7f03c-3958-4c03-9d33-9ed4978d88dd-proxy-ca-bundles\") pod \"controller-manager-588df8dfb8-tpplz\" (UID: \"a9a7f03c-3958-4c03-9d33-9ed4978d88dd\") " pod="openshift-controller-manager/controller-manager-588df8dfb8-tpplz" Jan 29 11:30:23 crc kubenswrapper[4766]: I0129 11:30:23.008373 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e218e976-ee08-4256-967a-4312beef6d6e-client-ca\") pod \"route-controller-manager-5ff5f95476-v7gdk\" (UID: \"e218e976-ee08-4256-967a-4312beef6d6e\") " pod="openshift-route-controller-manager/route-controller-manager-5ff5f95476-v7gdk" Jan 29 11:30:23 crc kubenswrapper[4766]: I0129 11:30:23.008393 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e218e976-ee08-4256-967a-4312beef6d6e-serving-cert\") pod \"route-controller-manager-5ff5f95476-v7gdk\" (UID: \"e218e976-ee08-4256-967a-4312beef6d6e\") " pod="openshift-route-controller-manager/route-controller-manager-5ff5f95476-v7gdk" Jan 29 11:30:23 crc kubenswrapper[4766]: I0129 11:30:23.008439 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e218e976-ee08-4256-967a-4312beef6d6e-config\") pod \"route-controller-manager-5ff5f95476-v7gdk\" (UID: \"e218e976-ee08-4256-967a-4312beef6d6e\") " pod="openshift-route-controller-manager/route-controller-manager-5ff5f95476-v7gdk" Jan 29 11:30:23 crc kubenswrapper[4766]: I0129 11:30:23.008468 4766 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-vp4vh\" (UniqueName: \"kubernetes.io/projected/a9a7f03c-3958-4c03-9d33-9ed4978d88dd-kube-api-access-vp4vh\") pod \"controller-manager-588df8dfb8-tpplz\" (UID: \"a9a7f03c-3958-4c03-9d33-9ed4978d88dd\") " pod="openshift-controller-manager/controller-manager-588df8dfb8-tpplz" Jan 29 11:30:23 crc kubenswrapper[4766]: I0129 11:30:23.008495 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9a7f03c-3958-4c03-9d33-9ed4978d88dd-serving-cert\") pod \"controller-manager-588df8dfb8-tpplz\" (UID: \"a9a7f03c-3958-4c03-9d33-9ed4978d88dd\") " pod="openshift-controller-manager/controller-manager-588df8dfb8-tpplz" Jan 29 11:30:23 crc kubenswrapper[4766]: I0129 11:30:23.010091 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a7f03c-3958-4c03-9d33-9ed4978d88dd-config\") pod \"controller-manager-588df8dfb8-tpplz\" (UID: \"a9a7f03c-3958-4c03-9d33-9ed4978d88dd\") " pod="openshift-controller-manager/controller-manager-588df8dfb8-tpplz" Jan 29 11:30:23 crc kubenswrapper[4766]: I0129 11:30:23.010229 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e218e976-ee08-4256-967a-4312beef6d6e-client-ca\") pod \"route-controller-manager-5ff5f95476-v7gdk\" (UID: \"e218e976-ee08-4256-967a-4312beef6d6e\") " pod="openshift-route-controller-manager/route-controller-manager-5ff5f95476-v7gdk" Jan 29 11:30:23 crc kubenswrapper[4766]: I0129 11:30:23.010854 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9a7f03c-3958-4c03-9d33-9ed4978d88dd-client-ca\") pod \"controller-manager-588df8dfb8-tpplz\" (UID: \"a9a7f03c-3958-4c03-9d33-9ed4978d88dd\") " pod="openshift-controller-manager/controller-manager-588df8dfb8-tpplz" Jan 29 11:30:23 crc kubenswrapper[4766]: I0129 11:30:23.011099 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e218e976-ee08-4256-967a-4312beef6d6e-config\") pod \"route-controller-manager-5ff5f95476-v7gdk\" (UID: \"e218e976-ee08-4256-967a-4312beef6d6e\") " pod="openshift-route-controller-manager/route-controller-manager-5ff5f95476-v7gdk" Jan 29 11:30:23 crc kubenswrapper[4766]: I0129 11:30:23.012362 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9a7f03c-3958-4c03-9d33-9ed4978d88dd-proxy-ca-bundles\") pod \"controller-manager-588df8dfb8-tpplz\" (UID: \"a9a7f03c-3958-4c03-9d33-9ed4978d88dd\") " pod="openshift-controller-manager/controller-manager-588df8dfb8-tpplz" Jan 29 11:30:23 crc kubenswrapper[4766]: I0129 11:30:23.016132 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e218e976-ee08-4256-967a-4312beef6d6e-serving-cert\") pod \"route-controller-manager-5ff5f95476-v7gdk\" (UID: \"e218e976-ee08-4256-967a-4312beef6d6e\") " pod="openshift-route-controller-manager/route-controller-manager-5ff5f95476-v7gdk" Jan 29 11:30:23 crc kubenswrapper[4766]: I0129 11:30:23.019687 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9a7f03c-3958-4c03-9d33-9ed4978d88dd-serving-cert\") pod \"controller-manager-588df8dfb8-tpplz\" (UID: \"a9a7f03c-3958-4c03-9d33-9ed4978d88dd\") " 
pod="openshift-controller-manager/controller-manager-588df8dfb8-tpplz" Jan 29 11:30:23 crc kubenswrapper[4766]: I0129 11:30:23.028233 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp4vh\" (UniqueName: \"kubernetes.io/projected/a9a7f03c-3958-4c03-9d33-9ed4978d88dd-kube-api-access-vp4vh\") pod \"controller-manager-588df8dfb8-tpplz\" (UID: \"a9a7f03c-3958-4c03-9d33-9ed4978d88dd\") " pod="openshift-controller-manager/controller-manager-588df8dfb8-tpplz" Jan 29 11:30:23 crc kubenswrapper[4766]: I0129 11:30:23.029164 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42shh\" (UniqueName: \"kubernetes.io/projected/e218e976-ee08-4256-967a-4312beef6d6e-kube-api-access-42shh\") pod \"route-controller-manager-5ff5f95476-v7gdk\" (UID: \"e218e976-ee08-4256-967a-4312beef6d6e\") " pod="openshift-route-controller-manager/route-controller-manager-5ff5f95476-v7gdk" Jan 29 11:30:23 crc kubenswrapper[4766]: I0129 11:30:23.192926 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-588df8dfb8-tpplz" Jan 29 11:30:23 crc kubenswrapper[4766]: I0129 11:30:23.202335 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ff5f95476-v7gdk" Jan 29 11:30:23 crc kubenswrapper[4766]: I0129 11:30:23.245684 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43d854e2-61c5-46d0-a85f-575c5fc51fa4" path="/var/lib/kubelet/pods/43d854e2-61c5-46d0-a85f-575c5fc51fa4/volumes" Jan 29 11:30:23 crc kubenswrapper[4766]: I0129 11:30:23.253373 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72cf9723-cba4-4f3b-90c4-c8b919e9b7a8" path="/var/lib/kubelet/pods/72cf9723-cba4-4f3b-90c4-c8b919e9b7a8/volumes" Jan 29 11:30:23 crc kubenswrapper[4766]: I0129 11:30:23.255302 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a36521a-d4cf-4c8e-8dbe-61599b472068" path="/var/lib/kubelet/pods/8a36521a-d4cf-4c8e-8dbe-61599b472068/volumes" Jan 29 11:30:23 crc kubenswrapper[4766]: I0129 11:30:23.256564 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a615f4a-f498-4abb-be15-10f224ff84df" path="/var/lib/kubelet/pods/8a615f4a-f498-4abb-be15-10f224ff84df/volumes" Jan 29 11:30:23 crc kubenswrapper[4766]: I0129 11:30:23.259939 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4adf06b-9f3e-42f1-b70f-31ec39923b11" path="/var/lib/kubelet/pods/d4adf06b-9f3e-42f1-b70f-31ec39923b11/volumes" Jan 29 11:30:23 crc kubenswrapper[4766]: I0129 11:30:23.261313 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="decf0d8c-7e98-464e-b3e5-fbd6a0856859" path="/var/lib/kubelet/pods/decf0d8c-7e98-464e-b3e5-fbd6a0856859/volumes" Jan 29 11:30:23 crc kubenswrapper[4766]: I0129 11:30:23.263213 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2b92849-383c-4876-bb0c-a0895dd534df" path="/var/lib/kubelet/pods/f2b92849-383c-4876-bb0c-a0895dd534df/volumes" Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:23.453522 4766 generic.go:334] "Generic (PLEG): container finished" podID="498c7200-d206-4ace-8627-99ae72a379ce" containerID="12a403820cf236101b36cc928fefc95140f651d97bfaaf3753e4711ee86f71df" exitCode=0 Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:23.454332 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6zxp" 
event={"ID":"498c7200-d206-4ace-8627-99ae72a379ce","Type":"ContainerDied","Data":"12a403820cf236101b36cc928fefc95140f651d97bfaaf3753e4711ee86f71df"} Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:23.454358 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6zxp" event={"ID":"498c7200-d206-4ace-8627-99ae72a379ce","Type":"ContainerStarted","Data":"52e302cc630e7fb4d9fe10c38d20e97aa673e3b1cc720bd7f9994eb5a4b3a54d"} Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:23.455993 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:23.556947 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l554z"] Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:23.558122 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l554z" Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:23.559959 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:23.569566 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l554z"] Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:23.616199 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0253132b-88f5-4f77-8bd4-5effddcdd170-utilities\") pod \"redhat-operators-l554z\" (UID: \"0253132b-88f5-4f77-8bd4-5effddcdd170\") " pod="openshift-marketplace/redhat-operators-l554z" Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:23.616247 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rsx7\" (UniqueName: \"kubernetes.io/projected/0253132b-88f5-4f77-8bd4-5effddcdd170-kube-api-access-7rsx7\") pod \"redhat-operators-l554z\" (UID: \"0253132b-88f5-4f77-8bd4-5effddcdd170\") " pod="openshift-marketplace/redhat-operators-l554z" Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:23.616321 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0253132b-88f5-4f77-8bd4-5effddcdd170-catalog-content\") pod \"redhat-operators-l554z\" (UID: \"0253132b-88f5-4f77-8bd4-5effddcdd170\") " pod="openshift-marketplace/redhat-operators-l554z" Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:23.717880 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0253132b-88f5-4f77-8bd4-5effddcdd170-catalog-content\") pod \"redhat-operators-l554z\" (UID: \"0253132b-88f5-4f77-8bd4-5effddcdd170\") " pod="openshift-marketplace/redhat-operators-l554z" Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:23.717951 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0253132b-88f5-4f77-8bd4-5effddcdd170-utilities\") pod \"redhat-operators-l554z\" (UID: \"0253132b-88f5-4f77-8bd4-5effddcdd170\") " pod="openshift-marketplace/redhat-operators-l554z" Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:23.717991 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rsx7\" (UniqueName: 
\"kubernetes.io/projected/0253132b-88f5-4f77-8bd4-5effddcdd170-kube-api-access-7rsx7\") pod \"redhat-operators-l554z\" (UID: \"0253132b-88f5-4f77-8bd4-5effddcdd170\") " pod="openshift-marketplace/redhat-operators-l554z" Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:23.718544 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0253132b-88f5-4f77-8bd4-5effddcdd170-utilities\") pod \"redhat-operators-l554z\" (UID: \"0253132b-88f5-4f77-8bd4-5effddcdd170\") " pod="openshift-marketplace/redhat-operators-l554z" Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:23.719550 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0253132b-88f5-4f77-8bd4-5effddcdd170-catalog-content\") pod \"redhat-operators-l554z\" (UID: \"0253132b-88f5-4f77-8bd4-5effddcdd170\") " pod="openshift-marketplace/redhat-operators-l554z" Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:23.736483 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rsx7\" (UniqueName: \"kubernetes.io/projected/0253132b-88f5-4f77-8bd4-5effddcdd170-kube-api-access-7rsx7\") pod \"redhat-operators-l554z\" (UID: \"0253132b-88f5-4f77-8bd4-5effddcdd170\") " pod="openshift-marketplace/redhat-operators-l554z" Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:23.875876 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l554z" Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:24.560899 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bf677"] Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:24.561828 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bf677" Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:24.564864 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:24.570894 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bf677"] Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:24.626981 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c25ea5fb-edce-471f-a010-c07f32090ee8-catalog-content\") pod \"community-operators-bf677\" (UID: \"c25ea5fb-edce-471f-a010-c07f32090ee8\") " pod="openshift-marketplace/community-operators-bf677" Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:24.627057 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c25ea5fb-edce-471f-a010-c07f32090ee8-utilities\") pod \"community-operators-bf677\" (UID: \"c25ea5fb-edce-471f-a010-c07f32090ee8\") " pod="openshift-marketplace/community-operators-bf677" Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:24.627117 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8mvf\" (UniqueName: \"kubernetes.io/projected/c25ea5fb-edce-471f-a010-c07f32090ee8-kube-api-access-b8mvf\") pod \"community-operators-bf677\" (UID: \"c25ea5fb-edce-471f-a010-c07f32090ee8\") " pod="openshift-marketplace/community-operators-bf677" Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:24.728093 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8mvf\" (UniqueName: \"kubernetes.io/projected/c25ea5fb-edce-471f-a010-c07f32090ee8-kube-api-access-b8mvf\") pod \"community-operators-bf677\" (UID: \"c25ea5fb-edce-471f-a010-c07f32090ee8\") " pod="openshift-marketplace/community-operators-bf677" Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:24.728174 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c25ea5fb-edce-471f-a010-c07f32090ee8-catalog-content\") pod \"community-operators-bf677\" (UID: \"c25ea5fb-edce-471f-a010-c07f32090ee8\") " pod="openshift-marketplace/community-operators-bf677" Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:24.728227 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c25ea5fb-edce-471f-a010-c07f32090ee8-utilities\") pod \"community-operators-bf677\" (UID: \"c25ea5fb-edce-471f-a010-c07f32090ee8\") " pod="openshift-marketplace/community-operators-bf677" Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:24.729023 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c25ea5fb-edce-471f-a010-c07f32090ee8-catalog-content\") pod \"community-operators-bf677\" (UID: \"c25ea5fb-edce-471f-a010-c07f32090ee8\") " pod="openshift-marketplace/community-operators-bf677" Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:24.729062 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c25ea5fb-edce-471f-a010-c07f32090ee8-utilities\") pod \"community-operators-bf677\" (UID: 
\"c25ea5fb-edce-471f-a010-c07f32090ee8\") " pod="openshift-marketplace/community-operators-bf677" Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:24.745248 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8mvf\" (UniqueName: \"kubernetes.io/projected/c25ea5fb-edce-471f-a010-c07f32090ee8-kube-api-access-b8mvf\") pod \"community-operators-bf677\" (UID: \"c25ea5fb-edce-471f-a010-c07f32090ee8\") " pod="openshift-marketplace/community-operators-bf677" Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:24.883865 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bf677" Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:25.823437 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-588df8dfb8-tpplz"] Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:25.838983 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l554z"] Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:25.843172 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff5f95476-v7gdk"] Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:25.847314 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bf677"] Jan 29 11:30:25 crc kubenswrapper[4766]: W0129 11:30:25.860974 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode218e976_ee08_4256_967a_4312beef6d6e.slice/crio-e1f94bac7b22c6682406695b0d071acb3822fad2e61b1d2a5a3cf0addf1ec34f WatchSource:0}: Error finding container e1f94bac7b22c6682406695b0d071acb3822fad2e61b1d2a5a3cf0addf1ec34f: Status 404 returned error can't find the container with id e1f94bac7b22c6682406695b0d071acb3822fad2e61b1d2a5a3cf0addf1ec34f Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:25.962038 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7gq4m"] Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:25.964118 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7gq4m" Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:25.969036 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7gq4m"] Jan 29 11:30:25 crc kubenswrapper[4766]: I0129 11:30:25.970158 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 11:30:26 crc kubenswrapper[4766]: I0129 11:30:26.144311 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3a5d5d4-41a2-4d0d-b915-aff5d9200703-catalog-content\") pod \"redhat-marketplace-7gq4m\" (UID: \"f3a5d5d4-41a2-4d0d-b915-aff5d9200703\") " pod="openshift-marketplace/redhat-marketplace-7gq4m" Jan 29 11:30:26 crc kubenswrapper[4766]: I0129 11:30:26.144399 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbzxq\" (UniqueName: \"kubernetes.io/projected/f3a5d5d4-41a2-4d0d-b915-aff5d9200703-kube-api-access-tbzxq\") pod \"redhat-marketplace-7gq4m\" (UID: \"f3a5d5d4-41a2-4d0d-b915-aff5d9200703\") " pod="openshift-marketplace/redhat-marketplace-7gq4m" Jan 29 11:30:26 crc kubenswrapper[4766]: I0129 11:30:26.144470 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3a5d5d4-41a2-4d0d-b915-aff5d9200703-utilities\") pod \"redhat-marketplace-7gq4m\" (UID: \"f3a5d5d4-41a2-4d0d-b915-aff5d9200703\") " pod="openshift-marketplace/redhat-marketplace-7gq4m" Jan 29 11:30:26 crc kubenswrapper[4766]: I0129 11:30:26.246023 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbzxq\" (UniqueName: \"kubernetes.io/projected/f3a5d5d4-41a2-4d0d-b915-aff5d9200703-kube-api-access-tbzxq\") pod \"redhat-marketplace-7gq4m\" (UID: \"f3a5d5d4-41a2-4d0d-b915-aff5d9200703\") " pod="openshift-marketplace/redhat-marketplace-7gq4m" Jan 29 11:30:26 crc kubenswrapper[4766]: I0129 11:30:26.246073 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3a5d5d4-41a2-4d0d-b915-aff5d9200703-utilities\") pod \"redhat-marketplace-7gq4m\" (UID: \"f3a5d5d4-41a2-4d0d-b915-aff5d9200703\") " pod="openshift-marketplace/redhat-marketplace-7gq4m" Jan 29 11:30:26 crc kubenswrapper[4766]: I0129 11:30:26.246134 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3a5d5d4-41a2-4d0d-b915-aff5d9200703-catalog-content\") pod \"redhat-marketplace-7gq4m\" (UID: \"f3a5d5d4-41a2-4d0d-b915-aff5d9200703\") " pod="openshift-marketplace/redhat-marketplace-7gq4m" Jan 29 11:30:26 crc kubenswrapper[4766]: I0129 11:30:26.246707 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3a5d5d4-41a2-4d0d-b915-aff5d9200703-utilities\") pod \"redhat-marketplace-7gq4m\" (UID: \"f3a5d5d4-41a2-4d0d-b915-aff5d9200703\") " pod="openshift-marketplace/redhat-marketplace-7gq4m" Jan 29 11:30:26 crc kubenswrapper[4766]: I0129 11:30:26.246889 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3a5d5d4-41a2-4d0d-b915-aff5d9200703-catalog-content\") pod \"redhat-marketplace-7gq4m\" (UID: 
\"f3a5d5d4-41a2-4d0d-b915-aff5d9200703\") " pod="openshift-marketplace/redhat-marketplace-7gq4m" Jan 29 11:30:26 crc kubenswrapper[4766]: I0129 11:30:26.266320 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbzxq\" (UniqueName: \"kubernetes.io/projected/f3a5d5d4-41a2-4d0d-b915-aff5d9200703-kube-api-access-tbzxq\") pod \"redhat-marketplace-7gq4m\" (UID: \"f3a5d5d4-41a2-4d0d-b915-aff5d9200703\") " pod="openshift-marketplace/redhat-marketplace-7gq4m" Jan 29 11:30:26 crc kubenswrapper[4766]: I0129 11:30:26.400563 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7gq4m" Jan 29 11:30:26 crc kubenswrapper[4766]: I0129 11:30:26.481190 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-588df8dfb8-tpplz" event={"ID":"a9a7f03c-3958-4c03-9d33-9ed4978d88dd","Type":"ContainerStarted","Data":"aeebf6048f1c77717ddf6138405b310168f2c9a659f0833f4602237db294c48e"} Jan 29 11:30:26 crc kubenswrapper[4766]: I0129 11:30:26.481248 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-588df8dfb8-tpplz" event={"ID":"a9a7f03c-3958-4c03-9d33-9ed4978d88dd","Type":"ContainerStarted","Data":"02436af1932f320423ca8a8ce517ce0fe903e30b35d0479fc3e701564239c98e"} Jan 29 11:30:26 crc kubenswrapper[4766]: I0129 11:30:26.481476 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-588df8dfb8-tpplz" Jan 29 11:30:26 crc kubenswrapper[4766]: I0129 11:30:26.490268 4766 generic.go:334] "Generic (PLEG): container finished" podID="c25ea5fb-edce-471f-a010-c07f32090ee8" containerID="13036fc752cc50788008ced2049692e74b1aec973bdd9c77edf56f9229eda688" exitCode=0 Jan 29 11:30:26 crc kubenswrapper[4766]: I0129 11:30:26.490368 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bf677" event={"ID":"c25ea5fb-edce-471f-a010-c07f32090ee8","Type":"ContainerDied","Data":"13036fc752cc50788008ced2049692e74b1aec973bdd9c77edf56f9229eda688"} Jan 29 11:30:26 crc kubenswrapper[4766]: I0129 11:30:26.490398 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bf677" event={"ID":"c25ea5fb-edce-471f-a010-c07f32090ee8","Type":"ContainerStarted","Data":"ea4760ed115d8304437d50e8f6b3155fe75bc1a51977308a8cbc1dbe337533f7"} Jan 29 11:30:26 crc kubenswrapper[4766]: I0129 11:30:26.493311 4766 generic.go:334] "Generic (PLEG): container finished" podID="498c7200-d206-4ace-8627-99ae72a379ce" containerID="e52c346d5c352dbe98573c37f1b20654473c1b197dc37cd6b22dec1135627734" exitCode=0 Jan 29 11:30:26 crc kubenswrapper[4766]: I0129 11:30:26.493543 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6zxp" event={"ID":"498c7200-d206-4ace-8627-99ae72a379ce","Type":"ContainerDied","Data":"e52c346d5c352dbe98573c37f1b20654473c1b197dc37cd6b22dec1135627734"} Jan 29 11:30:26 crc kubenswrapper[4766]: I0129 11:30:26.499608 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ff5f95476-v7gdk" event={"ID":"e218e976-ee08-4256-967a-4312beef6d6e","Type":"ContainerStarted","Data":"c780e8b348c5dcf22faa985ea9cd88903cebc39c22a144fc645dc5222c44d796"} Jan 29 11:30:26 crc kubenswrapper[4766]: I0129 11:30:26.499660 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-5ff5f95476-v7gdk" event={"ID":"e218e976-ee08-4256-967a-4312beef6d6e","Type":"ContainerStarted","Data":"e1f94bac7b22c6682406695b0d071acb3822fad2e61b1d2a5a3cf0addf1ec34f"} Jan 29 11:30:26 crc kubenswrapper[4766]: I0129 11:30:26.499999 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5ff5f95476-v7gdk" Jan 29 11:30:26 crc kubenswrapper[4766]: I0129 11:30:26.501983 4766 generic.go:334] "Generic (PLEG): container finished" podID="0253132b-88f5-4f77-8bd4-5effddcdd170" containerID="a85db39a63e9a87735aece87e9f69677f7567400a4611e43281b972254650515" exitCode=0 Jan 29 11:30:26 crc kubenswrapper[4766]: I0129 11:30:26.502021 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l554z" event={"ID":"0253132b-88f5-4f77-8bd4-5effddcdd170","Type":"ContainerDied","Data":"a85db39a63e9a87735aece87e9f69677f7567400a4611e43281b972254650515"} Jan 29 11:30:26 crc kubenswrapper[4766]: I0129 11:30:26.502065 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l554z" event={"ID":"0253132b-88f5-4f77-8bd4-5effddcdd170","Type":"ContainerStarted","Data":"0b03a9826dd18307fe28b09899368630ae8c8c450b5b55531ce385d4b58a6f3a"} Jan 29 11:30:26 crc kubenswrapper[4766]: I0129 11:30:26.520491 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-588df8dfb8-tpplz" podStartSLOduration=7.520473865 podStartE2EDuration="7.520473865s" podCreationTimestamp="2026-01-29 11:30:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:30:26.519177037 +0000 UTC m=+563.631570048" watchObservedRunningTime="2026-01-29 11:30:26.520473865 +0000 UTC m=+563.632866896" Jan 29 11:30:26 crc kubenswrapper[4766]: I0129 11:30:26.592201 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-588df8dfb8-tpplz" Jan 29 11:30:26 crc kubenswrapper[4766]: I0129 11:30:26.618915 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5ff5f95476-v7gdk" Jan 29 11:30:26 crc kubenswrapper[4766]: I0129 11:30:26.665183 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5ff5f95476-v7gdk" podStartSLOduration=7.665160769 podStartE2EDuration="7.665160769s" podCreationTimestamp="2026-01-29 11:30:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:30:26.662248264 +0000 UTC m=+563.774641285" watchObservedRunningTime="2026-01-29 11:30:26.665160769 +0000 UTC m=+563.777553780" Jan 29 11:30:27 crc kubenswrapper[4766]: I0129 11:30:27.057119 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7gq4m"] Jan 29 11:30:27 crc kubenswrapper[4766]: I0129 11:30:27.511048 4766 generic.go:334] "Generic (PLEG): container finished" podID="f3a5d5d4-41a2-4d0d-b915-aff5d9200703" containerID="d280c299c298fcc207e46adaee0a0bb6da3a6d92f8170225b4857063c29b907c" exitCode=0 Jan 29 11:30:27 crc kubenswrapper[4766]: I0129 11:30:27.511222 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-7gq4m" event={"ID":"f3a5d5d4-41a2-4d0d-b915-aff5d9200703","Type":"ContainerDied","Data":"d280c299c298fcc207e46adaee0a0bb6da3a6d92f8170225b4857063c29b907c"} Jan 29 11:30:27 crc kubenswrapper[4766]: I0129 11:30:27.511756 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gq4m" event={"ID":"f3a5d5d4-41a2-4d0d-b915-aff5d9200703","Type":"ContainerStarted","Data":"f948aa6ee0b7e34023f013232254da785265a8310b2bda0df78f239bfa95b47a"} Jan 29 11:30:28 crc kubenswrapper[4766]: I0129 11:30:28.522938 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6zxp" event={"ID":"498c7200-d206-4ace-8627-99ae72a379ce","Type":"ContainerStarted","Data":"3bee75038c583d24f089c3d0097363c32605eccf1117ed6c805597da443bf79f"} Jan 29 11:30:28 crc kubenswrapper[4766]: I0129 11:30:28.546547 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c6zxp" podStartSLOduration=2.207893284 podStartE2EDuration="6.54652859s" podCreationTimestamp="2026-01-29 11:30:22 +0000 UTC" firstStartedPulling="2026-01-29 11:30:23.455714435 +0000 UTC m=+560.568107446" lastFinishedPulling="2026-01-29 11:30:27.794349741 +0000 UTC m=+564.906742752" observedRunningTime="2026-01-29 11:30:28.545388187 +0000 UTC m=+565.657781218" watchObservedRunningTime="2026-01-29 11:30:28.54652859 +0000 UTC m=+565.658921601" Jan 29 11:30:29 crc kubenswrapper[4766]: I0129 11:30:29.529325 4766 generic.go:334] "Generic (PLEG): container finished" podID="c25ea5fb-edce-471f-a010-c07f32090ee8" containerID="4636251dfe2f60b6a0b2f78d4ccdb88f1966510db07b6ef31c54f68104c4a85b" exitCode=0 Jan 29 11:30:29 crc kubenswrapper[4766]: I0129 11:30:29.529429 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bf677" event={"ID":"c25ea5fb-edce-471f-a010-c07f32090ee8","Type":"ContainerDied","Data":"4636251dfe2f60b6a0b2f78d4ccdb88f1966510db07b6ef31c54f68104c4a85b"} Jan 29 11:30:29 crc kubenswrapper[4766]: I0129 11:30:29.533559 4766 generic.go:334] "Generic (PLEG): container finished" podID="f3a5d5d4-41a2-4d0d-b915-aff5d9200703" containerID="892af54c905d01ccd795b19ee98f7b37a4e1235e6831c05ef1798a40adb52f19" exitCode=0 Jan 29 11:30:29 crc kubenswrapper[4766]: I0129 11:30:29.533771 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gq4m" event={"ID":"f3a5d5d4-41a2-4d0d-b915-aff5d9200703","Type":"ContainerDied","Data":"892af54c905d01ccd795b19ee98f7b37a4e1235e6831c05ef1798a40adb52f19"} Jan 29 11:30:29 crc kubenswrapper[4766]: I0129 11:30:29.536116 4766 generic.go:334] "Generic (PLEG): container finished" podID="0253132b-88f5-4f77-8bd4-5effddcdd170" containerID="67aa41d9c13ec30a012740513bcbea6e0815283814107f2f511c46f9cffbd074" exitCode=0 Jan 29 11:30:29 crc kubenswrapper[4766]: I0129 11:30:29.536715 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l554z" event={"ID":"0253132b-88f5-4f77-8bd4-5effddcdd170","Type":"ContainerDied","Data":"67aa41d9c13ec30a012740513bcbea6e0815283814107f2f511c46f9cffbd074"} Jan 29 11:30:30 crc kubenswrapper[4766]: I0129 11:30:30.544648 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l554z" event={"ID":"0253132b-88f5-4f77-8bd4-5effddcdd170","Type":"ContainerStarted","Data":"86ed3aff68617ecec5e4f4567d1211caf792f99dcde7d68c8802751c8b03c6ab"} Jan 29 11:30:30 crc 
kubenswrapper[4766]: I0129 11:30:30.573344 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l554z" podStartSLOduration=3.91159575 podStartE2EDuration="7.573325769s" podCreationTimestamp="2026-01-29 11:30:23 +0000 UTC" firstStartedPulling="2026-01-29 11:30:26.503079226 +0000 UTC m=+563.615472237" lastFinishedPulling="2026-01-29 11:30:30.164809245 +0000 UTC m=+567.277202256" observedRunningTime="2026-01-29 11:30:30.569870277 +0000 UTC m=+567.682263298" watchObservedRunningTime="2026-01-29 11:30:30.573325769 +0000 UTC m=+567.685718790" Jan 29 11:30:32 crc kubenswrapper[4766]: I0129 11:30:32.530099 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c6zxp" Jan 29 11:30:32 crc kubenswrapper[4766]: I0129 11:30:32.530746 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c6zxp" Jan 29 11:30:32 crc kubenswrapper[4766]: I0129 11:30:32.556692 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bf677" event={"ID":"c25ea5fb-edce-471f-a010-c07f32090ee8","Type":"ContainerStarted","Data":"81e23ee637a80849e0a688cc147892d2e4ebd05dbec2fc97559ae08148c0f30e"} Jan 29 11:30:32 crc kubenswrapper[4766]: I0129 11:30:32.560054 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gq4m" event={"ID":"f3a5d5d4-41a2-4d0d-b915-aff5d9200703","Type":"ContainerStarted","Data":"f379bff2017398c9246b9c40dd0ae11d8ff41afbec694b47307a5b8bb1fd5818"} Jan 29 11:30:32 crc kubenswrapper[4766]: I0129 11:30:32.592532 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c6zxp" Jan 29 11:30:32 crc kubenswrapper[4766]: I0129 11:30:32.600393 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bf677" podStartSLOduration=4.158311921 podStartE2EDuration="8.600372613s" podCreationTimestamp="2026-01-29 11:30:24 +0000 UTC" firstStartedPulling="2026-01-29 11:30:26.491891729 +0000 UTC m=+563.604284740" lastFinishedPulling="2026-01-29 11:30:30.933952421 +0000 UTC m=+568.046345432" observedRunningTime="2026-01-29 11:30:32.578881154 +0000 UTC m=+569.691274155" watchObservedRunningTime="2026-01-29 11:30:32.600372613 +0000 UTC m=+569.712765634" Jan 29 11:30:32 crc kubenswrapper[4766]: I0129 11:30:32.601699 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7gq4m" podStartSLOduration=3.734825351 podStartE2EDuration="7.601688901s" podCreationTimestamp="2026-01-29 11:30:25 +0000 UTC" firstStartedPulling="2026-01-29 11:30:27.784227685 +0000 UTC m=+564.896620696" lastFinishedPulling="2026-01-29 11:30:31.651091235 +0000 UTC m=+568.763484246" observedRunningTime="2026-01-29 11:30:32.599859258 +0000 UTC m=+569.712252269" watchObservedRunningTime="2026-01-29 11:30:32.601688901 +0000 UTC m=+569.714081902" Jan 29 11:30:33 crc kubenswrapper[4766]: I0129 11:30:33.876683 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l554z" Jan 29 11:30:33 crc kubenswrapper[4766]: I0129 11:30:33.877026 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l554z" Jan 29 11:30:34 crc kubenswrapper[4766]: I0129 11:30:34.885086 4766 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bf677" Jan 29 11:30:34 crc kubenswrapper[4766]: I0129 11:30:34.885154 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bf677" Jan 29 11:30:34 crc kubenswrapper[4766]: I0129 11:30:34.922303 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l554z" podUID="0253132b-88f5-4f77-8bd4-5effddcdd170" containerName="registry-server" probeResult="failure" output=< Jan 29 11:30:34 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s Jan 29 11:30:34 crc kubenswrapper[4766]: > Jan 29 11:30:34 crc kubenswrapper[4766]: I0129 11:30:34.931152 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bf677" Jan 29 11:30:36 crc kubenswrapper[4766]: I0129 11:30:36.401794 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7gq4m" Jan 29 11:30:36 crc kubenswrapper[4766]: I0129 11:30:36.402214 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7gq4m" Jan 29 11:30:36 crc kubenswrapper[4766]: I0129 11:30:36.439565 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7gq4m" Jan 29 11:30:37 crc kubenswrapper[4766]: I0129 11:30:37.628608 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7gq4m" Jan 29 11:30:42 crc kubenswrapper[4766]: I0129 11:30:42.576619 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c6zxp" Jan 29 11:30:45 crc kubenswrapper[4766]: I0129 11:30:43.925610 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l554z" Jan 29 11:30:45 crc kubenswrapper[4766]: I0129 11:30:43.975606 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l554z" Jan 29 11:30:45 crc kubenswrapper[4766]: I0129 11:30:44.922240 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bf677" Jan 29 11:30:46 crc kubenswrapper[4766]: I0129 11:30:46.331007 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" podUID="a7315b30-c300-4afe-b798-de15fe9e9cc8" containerName="oauth-openshift" containerID="cri-o://d4389e9066e461ff4d6e5b7fe6ed7ccb0123f9201135f5982aa271f7f3044694" gracePeriod=15 Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.361697 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.398023 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7bd64b567b-htrxt"] Jan 29 11:30:47 crc kubenswrapper[4766]: E0129 11:30:47.398236 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7315b30-c300-4afe-b798-de15fe9e9cc8" containerName="oauth-openshift" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.398249 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7315b30-c300-4afe-b798-de15fe9e9cc8" containerName="oauth-openshift" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.398332 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7315b30-c300-4afe-b798-de15fe9e9cc8" containerName="oauth-openshift" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.398705 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.411010 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7bd64b567b-htrxt"] Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.462115 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-cliconfig\") pod \"a7315b30-c300-4afe-b798-de15fe9e9cc8\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.462179 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-service-ca\") pod \"a7315b30-c300-4afe-b798-de15fe9e9cc8\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.462236 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-user-idp-0-file-data\") pod \"a7315b30-c300-4afe-b798-de15fe9e9cc8\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.463036 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "a7315b30-c300-4afe-b798-de15fe9e9cc8" (UID: "a7315b30-c300-4afe-b798-de15fe9e9cc8"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.463096 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4mrt\" (UniqueName: \"kubernetes.io/projected/a7315b30-c300-4afe-b798-de15fe9e9cc8-kube-api-access-m4mrt\") pod \"a7315b30-c300-4afe-b798-de15fe9e9cc8\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.463126 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a7315b30-c300-4afe-b798-de15fe9e9cc8-audit-policies\") pod \"a7315b30-c300-4afe-b798-de15fe9e9cc8\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.463191 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "a7315b30-c300-4afe-b798-de15fe9e9cc8" (UID: "a7315b30-c300-4afe-b798-de15fe9e9cc8"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.463511 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-user-template-login\") pod \"a7315b30-c300-4afe-b798-de15fe9e9cc8\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.463536 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-ocp-branding-template\") pod \"a7315b30-c300-4afe-b798-de15fe9e9cc8\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.463558 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-serving-cert\") pod \"a7315b30-c300-4afe-b798-de15fe9e9cc8\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.463579 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a7315b30-c300-4afe-b798-de15fe9e9cc8-audit-dir\") pod \"a7315b30-c300-4afe-b798-de15fe9e9cc8\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.463594 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-session\") pod \"a7315b30-c300-4afe-b798-de15fe9e9cc8\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.463609 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-user-template-error\") pod \"a7315b30-c300-4afe-b798-de15fe9e9cc8\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " Jan 29 11:30:47 
crc kubenswrapper[4766]: I0129 11:30:47.463647 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-router-certs\") pod \"a7315b30-c300-4afe-b798-de15fe9e9cc8\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.463672 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-user-template-provider-selection\") pod \"a7315b30-c300-4afe-b798-de15fe9e9cc8\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.463706 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-trusted-ca-bundle\") pod \"a7315b30-c300-4afe-b798-de15fe9e9cc8\" (UID: \"a7315b30-c300-4afe-b798-de15fe9e9cc8\") " Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.463826 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.463857 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.463873 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-user-template-error\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.463888 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/07d53195-1650-43e1-86b8-3704bd5cd660-audit-policies\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.463917 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: 
I0129 11:30:47.463941 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.463956 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flj67\" (UniqueName: \"kubernetes.io/projected/07d53195-1650-43e1-86b8-3704bd5cd660-kube-api-access-flj67\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.463972 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.463987 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-system-router-certs\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.464003 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.464020 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/07d53195-1650-43e1-86b8-3704bd5cd660-audit-dir\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.464039 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-user-template-login\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.464060 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-system-session\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " 
pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.464081 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-system-service-ca\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.464156 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.464170 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.463704 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7315b30-c300-4afe-b798-de15fe9e9cc8-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "a7315b30-c300-4afe-b798-de15fe9e9cc8" (UID: "a7315b30-c300-4afe-b798-de15fe9e9cc8"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.463752 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7315b30-c300-4afe-b798-de15fe9e9cc8-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "a7315b30-c300-4afe-b798-de15fe9e9cc8" (UID: "a7315b30-c300-4afe-b798-de15fe9e9cc8"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.465347 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "a7315b30-c300-4afe-b798-de15fe9e9cc8" (UID: "a7315b30-c300-4afe-b798-de15fe9e9cc8"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.469067 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7315b30-c300-4afe-b798-de15fe9e9cc8-kube-api-access-m4mrt" (OuterVolumeSpecName: "kube-api-access-m4mrt") pod "a7315b30-c300-4afe-b798-de15fe9e9cc8" (UID: "a7315b30-c300-4afe-b798-de15fe9e9cc8"). InnerVolumeSpecName "kube-api-access-m4mrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.469726 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "a7315b30-c300-4afe-b798-de15fe9e9cc8" (UID: "a7315b30-c300-4afe-b798-de15fe9e9cc8"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.470304 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "a7315b30-c300-4afe-b798-de15fe9e9cc8" (UID: "a7315b30-c300-4afe-b798-de15fe9e9cc8"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.470720 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "a7315b30-c300-4afe-b798-de15fe9e9cc8" (UID: "a7315b30-c300-4afe-b798-de15fe9e9cc8"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.473762 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "a7315b30-c300-4afe-b798-de15fe9e9cc8" (UID: "a7315b30-c300-4afe-b798-de15fe9e9cc8"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.475304 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "a7315b30-c300-4afe-b798-de15fe9e9cc8" (UID: "a7315b30-c300-4afe-b798-de15fe9e9cc8"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.480900 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "a7315b30-c300-4afe-b798-de15fe9e9cc8" (UID: "a7315b30-c300-4afe-b798-de15fe9e9cc8"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.481762 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "a7315b30-c300-4afe-b798-de15fe9e9cc8" (UID: "a7315b30-c300-4afe-b798-de15fe9e9cc8"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.482590 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "a7315b30-c300-4afe-b798-de15fe9e9cc8" (UID: "a7315b30-c300-4afe-b798-de15fe9e9cc8"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.564742 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.564803 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.564822 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flj67\" (UniqueName: \"kubernetes.io/projected/07d53195-1650-43e1-86b8-3704bd5cd660-kube-api-access-flj67\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.564841 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.564860 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-system-router-certs\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.564884 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.564907 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/07d53195-1650-43e1-86b8-3704bd5cd660-audit-dir\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.564942 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-user-template-login\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 
29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.564965 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-system-session\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.564984 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-system-service-ca\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.565035 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.565064 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.565085 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-user-template-error\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.565105 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/07d53195-1650-43e1-86b8-3704bd5cd660-audit-policies\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.565153 4766 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a7315b30-c300-4afe-b798-de15fe9e9cc8-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.565167 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.565180 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 
11:30:47.565227 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.565241 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.565256 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.565270 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.565282 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4mrt\" (UniqueName: \"kubernetes.io/projected/a7315b30-c300-4afe-b798-de15fe9e9cc8-kube-api-access-m4mrt\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.565294 4766 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a7315b30-c300-4afe-b798-de15fe9e9cc8-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.565306 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.565317 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.565331 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7315b30-c300-4afe-b798-de15fe9e9cc8-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.566044 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/07d53195-1650-43e1-86b8-3704bd5cd660-audit-policies\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.566380 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 
11:30:47.566612 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/07d53195-1650-43e1-86b8-3704bd5cd660-audit-dir\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.566927 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.567639 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-system-service-ca\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.568530 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-system-session\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.568704 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.569308 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.569501 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-user-template-error\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.570167 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.572664 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.577654 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-user-template-login\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.578885 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/07d53195-1650-43e1-86b8-3704bd5cd660-v4-0-config-system-router-certs\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.588656 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flj67\" (UniqueName: \"kubernetes.io/projected/07d53195-1650-43e1-86b8-3704bd5cd660-kube-api-access-flj67\") pod \"oauth-openshift-7bd64b567b-htrxt\" (UID: \"07d53195-1650-43e1-86b8-3704bd5cd660\") " pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.648557 4766 generic.go:334] "Generic (PLEG): container finished" podID="a7315b30-c300-4afe-b798-de15fe9e9cc8" containerID="d4389e9066e461ff4d6e5b7fe6ed7ccb0123f9201135f5982aa271f7f3044694" exitCode=0 Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.648597 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" event={"ID":"a7315b30-c300-4afe-b798-de15fe9e9cc8","Type":"ContainerDied","Data":"d4389e9066e461ff4d6e5b7fe6ed7ccb0123f9201135f5982aa271f7f3044694"} Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.648639 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" event={"ID":"a7315b30-c300-4afe-b798-de15fe9e9cc8","Type":"ContainerDied","Data":"e5e0b62317a8747803c473fa1ee07f9e73b4bd6ed99dfbf5eeb903eef18d24be"} Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.648647 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-r9vtz" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.648658 4766 scope.go:117] "RemoveContainer" containerID="d4389e9066e461ff4d6e5b7fe6ed7ccb0123f9201135f5982aa271f7f3044694" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.666486 4766 scope.go:117] "RemoveContainer" containerID="d4389e9066e461ff4d6e5b7fe6ed7ccb0123f9201135f5982aa271f7f3044694" Jan 29 11:30:47 crc kubenswrapper[4766]: E0129 11:30:47.667954 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4389e9066e461ff4d6e5b7fe6ed7ccb0123f9201135f5982aa271f7f3044694\": container with ID starting with d4389e9066e461ff4d6e5b7fe6ed7ccb0123f9201135f5982aa271f7f3044694 not found: ID does not exist" containerID="d4389e9066e461ff4d6e5b7fe6ed7ccb0123f9201135f5982aa271f7f3044694" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.668016 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4389e9066e461ff4d6e5b7fe6ed7ccb0123f9201135f5982aa271f7f3044694"} err="failed to get container status \"d4389e9066e461ff4d6e5b7fe6ed7ccb0123f9201135f5982aa271f7f3044694\": rpc error: code = NotFound desc = could not find container \"d4389e9066e461ff4d6e5b7fe6ed7ccb0123f9201135f5982aa271f7f3044694\": container with ID starting with d4389e9066e461ff4d6e5b7fe6ed7ccb0123f9201135f5982aa271f7f3044694 not found: ID does not exist" Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.680984 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-r9vtz"] Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.684904 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-r9vtz"] Jan 29 11:30:47 crc kubenswrapper[4766]: I0129 11:30:47.721370 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:48 crc kubenswrapper[4766]: I0129 11:30:48.226624 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7bd64b567b-htrxt"] Jan 29 11:30:48 crc kubenswrapper[4766]: I0129 11:30:48.655493 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" event={"ID":"07d53195-1650-43e1-86b8-3704bd5cd660","Type":"ContainerStarted","Data":"49c94a6c67fad5e4f37f413f2d4b53e9637c0eecb4e1bfd239138a2985cae56b"} Jan 29 11:30:49 crc kubenswrapper[4766]: I0129 11:30:49.231564 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7315b30-c300-4afe-b798-de15fe9e9cc8" path="/var/lib/kubelet/pods/a7315b30-c300-4afe-b798-de15fe9e9cc8/volumes" Jan 29 11:30:49 crc kubenswrapper[4766]: I0129 11:30:49.668048 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" event={"ID":"07d53195-1650-43e1-86b8-3704bd5cd660","Type":"ContainerStarted","Data":"26501ac912412b5626f2b3a2814667b40496add722f5ac3392adbf2e8e3c24d3"} Jan 29 11:30:49 crc kubenswrapper[4766]: I0129 11:30:49.670993 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:49 crc kubenswrapper[4766]: I0129 11:30:49.676666 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" Jan 29 11:30:49 crc kubenswrapper[4766]: I0129 11:30:49.700336 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7bd64b567b-htrxt" podStartSLOduration=28.70032136 podStartE2EDuration="28.70032136s" podCreationTimestamp="2026-01-29 11:30:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:30:49.69860467 +0000 UTC m=+586.810997691" watchObservedRunningTime="2026-01-29 11:30:49.70032136 +0000 UTC m=+586.812714371" Jan 29 11:31:16 crc kubenswrapper[4766]: I0129 11:31:16.362571 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:31:16 crc kubenswrapper[4766]: I0129 11:31:16.363067 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:31:46 crc kubenswrapper[4766]: I0129 11:31:46.361999 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:31:46 crc kubenswrapper[4766]: I0129 11:31:46.362829 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:32:16 crc kubenswrapper[4766]: I0129 11:32:16.362091 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:32:16 crc kubenswrapper[4766]: I0129 11:32:16.362649 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:32:16 crc kubenswrapper[4766]: I0129 11:32:16.362692 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" Jan 29 11:32:16 crc kubenswrapper[4766]: I0129 11:32:16.363248 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"289b46d81663eab98ebc9c1c1ff871931cb149c2d0ce77c14017931a9f7bb210"} pod="openshift-machine-config-operator/machine-config-daemon-npgg8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:32:16 crc kubenswrapper[4766]: I0129 11:32:16.363300 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" containerID="cri-o://289b46d81663eab98ebc9c1c1ff871931cb149c2d0ce77c14017931a9f7bb210" gracePeriod=600 Jan 29 11:32:17 crc kubenswrapper[4766]: I0129 11:32:17.148746 4766 generic.go:334] "Generic (PLEG): container finished" podID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerID="289b46d81663eab98ebc9c1c1ff871931cb149c2d0ce77c14017931a9f7bb210" exitCode=0 Jan 29 11:32:17 crc kubenswrapper[4766]: I0129 11:32:17.149058 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" event={"ID":"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a","Type":"ContainerDied","Data":"289b46d81663eab98ebc9c1c1ff871931cb149c2d0ce77c14017931a9f7bb210"} Jan 29 11:32:17 crc kubenswrapper[4766]: I0129 11:32:17.149087 4766 scope.go:117] "RemoveContainer" containerID="fad51bc095d53b0b4e38951d803ca7e9fd8430c262fc7df79bdb27e585373f6f" Jan 29 11:32:18 crc kubenswrapper[4766]: I0129 11:32:18.157467 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" event={"ID":"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a","Type":"ContainerStarted","Data":"7fa4d042a6c05c408b2d3dedbac93c3fc30503468a0d3531b823b069420802bd"} Jan 29 11:34:04 crc kubenswrapper[4766]: I0129 11:34:04.707141 4766 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 11:34:25 crc kubenswrapper[4766]: I0129 11:34:25.583580 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-v9cpg"] Jan 29 11:34:25 crc kubenswrapper[4766]: I0129 11:34:25.584900 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" Jan 29 11:34:25 crc kubenswrapper[4766]: I0129 11:34:25.606200 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-v9cpg"] Jan 29 11:34:25 crc kubenswrapper[4766]: I0129 11:34:25.660319 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/80153c6c-77d8-48e1-a351-25ea07b8298e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-v9cpg\" (UID: \"80153c6c-77d8-48e1-a351-25ea07b8298e\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" Jan 29 11:34:25 crc kubenswrapper[4766]: I0129 11:34:25.660368 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/80153c6c-77d8-48e1-a351-25ea07b8298e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-v9cpg\" (UID: \"80153c6c-77d8-48e1-a351-25ea07b8298e\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" Jan 29 11:34:25 crc kubenswrapper[4766]: I0129 11:34:25.660387 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/80153c6c-77d8-48e1-a351-25ea07b8298e-registry-certificates\") pod \"image-registry-66df7c8f76-v9cpg\" (UID: \"80153c6c-77d8-48e1-a351-25ea07b8298e\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" Jan 29 11:34:25 crc kubenswrapper[4766]: I0129 11:34:25.660543 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhxcg\" (UniqueName: \"kubernetes.io/projected/80153c6c-77d8-48e1-a351-25ea07b8298e-kube-api-access-hhxcg\") pod \"image-registry-66df7c8f76-v9cpg\" (UID: \"80153c6c-77d8-48e1-a351-25ea07b8298e\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" Jan 29 11:34:25 crc kubenswrapper[4766]: I0129 11:34:25.660575 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/80153c6c-77d8-48e1-a351-25ea07b8298e-registry-tls\") pod \"image-registry-66df7c8f76-v9cpg\" (UID: \"80153c6c-77d8-48e1-a351-25ea07b8298e\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" Jan 29 11:34:25 crc kubenswrapper[4766]: I0129 11:34:25.660618 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80153c6c-77d8-48e1-a351-25ea07b8298e-trusted-ca\") pod \"image-registry-66df7c8f76-v9cpg\" (UID: \"80153c6c-77d8-48e1-a351-25ea07b8298e\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" Jan 29 11:34:25 crc kubenswrapper[4766]: I0129 11:34:25.660641 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/80153c6c-77d8-48e1-a351-25ea07b8298e-bound-sa-token\") pod \"image-registry-66df7c8f76-v9cpg\" (UID: \"80153c6c-77d8-48e1-a351-25ea07b8298e\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" Jan 29 11:34:25 crc kubenswrapper[4766]: I0129 11:34:25.660666 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-v9cpg\" (UID: \"80153c6c-77d8-48e1-a351-25ea07b8298e\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" Jan 29 11:34:25 crc kubenswrapper[4766]: I0129 11:34:25.687205 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-v9cpg\" (UID: \"80153c6c-77d8-48e1-a351-25ea07b8298e\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" Jan 29 11:34:25 crc kubenswrapper[4766]: I0129 11:34:25.761371 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80153c6c-77d8-48e1-a351-25ea07b8298e-trusted-ca\") pod \"image-registry-66df7c8f76-v9cpg\" (UID: \"80153c6c-77d8-48e1-a351-25ea07b8298e\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" Jan 29 11:34:25 crc kubenswrapper[4766]: I0129 11:34:25.761455 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/80153c6c-77d8-48e1-a351-25ea07b8298e-bound-sa-token\") pod \"image-registry-66df7c8f76-v9cpg\" (UID: \"80153c6c-77d8-48e1-a351-25ea07b8298e\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" Jan 29 11:34:25 crc kubenswrapper[4766]: I0129 11:34:25.761532 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/80153c6c-77d8-48e1-a351-25ea07b8298e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-v9cpg\" (UID: \"80153c6c-77d8-48e1-a351-25ea07b8298e\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" Jan 29 11:34:25 crc kubenswrapper[4766]: I0129 11:34:25.761563 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/80153c6c-77d8-48e1-a351-25ea07b8298e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-v9cpg\" (UID: \"80153c6c-77d8-48e1-a351-25ea07b8298e\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" Jan 29 11:34:25 crc kubenswrapper[4766]: I0129 11:34:25.761588 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/80153c6c-77d8-48e1-a351-25ea07b8298e-registry-certificates\") pod \"image-registry-66df7c8f76-v9cpg\" (UID: \"80153c6c-77d8-48e1-a351-25ea07b8298e\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" Jan 29 11:34:25 crc kubenswrapper[4766]: I0129 11:34:25.761609 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhxcg\" (UniqueName: \"kubernetes.io/projected/80153c6c-77d8-48e1-a351-25ea07b8298e-kube-api-access-hhxcg\") pod \"image-registry-66df7c8f76-v9cpg\" (UID: \"80153c6c-77d8-48e1-a351-25ea07b8298e\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" Jan 29 11:34:25 crc kubenswrapper[4766]: I0129 11:34:25.761645 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/80153c6c-77d8-48e1-a351-25ea07b8298e-registry-tls\") pod \"image-registry-66df7c8f76-v9cpg\" (UID: \"80153c6c-77d8-48e1-a351-25ea07b8298e\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" Jan 29 11:34:25 crc kubenswrapper[4766]: I0129 11:34:25.762360 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/80153c6c-77d8-48e1-a351-25ea07b8298e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-v9cpg\" (UID: \"80153c6c-77d8-48e1-a351-25ea07b8298e\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" Jan 29 11:34:25 crc kubenswrapper[4766]: I0129 11:34:25.762873 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80153c6c-77d8-48e1-a351-25ea07b8298e-trusted-ca\") pod \"image-registry-66df7c8f76-v9cpg\" (UID: \"80153c6c-77d8-48e1-a351-25ea07b8298e\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" Jan 29 11:34:25 crc kubenswrapper[4766]: I0129 11:34:25.763343 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/80153c6c-77d8-48e1-a351-25ea07b8298e-registry-certificates\") pod \"image-registry-66df7c8f76-v9cpg\" (UID: \"80153c6c-77d8-48e1-a351-25ea07b8298e\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" Jan 29 11:34:25 crc kubenswrapper[4766]: I0129 11:34:25.767797 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/80153c6c-77d8-48e1-a351-25ea07b8298e-registry-tls\") pod \"image-registry-66df7c8f76-v9cpg\" (UID: \"80153c6c-77d8-48e1-a351-25ea07b8298e\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" Jan 29 11:34:25 crc kubenswrapper[4766]: I0129 11:34:25.767908 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/80153c6c-77d8-48e1-a351-25ea07b8298e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-v9cpg\" (UID: \"80153c6c-77d8-48e1-a351-25ea07b8298e\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" Jan 29 11:34:25 crc kubenswrapper[4766]: I0129 11:34:25.777493 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhxcg\" (UniqueName: \"kubernetes.io/projected/80153c6c-77d8-48e1-a351-25ea07b8298e-kube-api-access-hhxcg\") pod \"image-registry-66df7c8f76-v9cpg\" (UID: \"80153c6c-77d8-48e1-a351-25ea07b8298e\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" Jan 29 11:34:25 crc kubenswrapper[4766]: I0129 11:34:25.778722 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/80153c6c-77d8-48e1-a351-25ea07b8298e-bound-sa-token\") pod \"image-registry-66df7c8f76-v9cpg\" (UID: \"80153c6c-77d8-48e1-a351-25ea07b8298e\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" Jan 29 11:34:25 crc kubenswrapper[4766]: I0129 11:34:25.902027 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" Jan 29 11:34:26 crc kubenswrapper[4766]: I0129 11:34:26.294443 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-v9cpg"] Jan 29 11:34:26 crc kubenswrapper[4766]: I0129 11:34:26.806340 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" event={"ID":"80153c6c-77d8-48e1-a351-25ea07b8298e","Type":"ContainerStarted","Data":"f3769e4bd791c40658e9e9a6116099eba400bd33cfe1d0c8567e7ce3f7b8763d"} Jan 29 11:34:26 crc kubenswrapper[4766]: I0129 11:34:26.806806 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" Jan 29 11:34:26 crc kubenswrapper[4766]: I0129 11:34:26.806827 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" event={"ID":"80153c6c-77d8-48e1-a351-25ea07b8298e","Type":"ContainerStarted","Data":"895f18bf7980902a90ab26aca0ac4246a09ef9ab06d878a6d13d3943354c969b"} Jan 29 11:34:45 crc kubenswrapper[4766]: I0129 11:34:45.910562 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" Jan 29 11:34:45 crc kubenswrapper[4766]: I0129 11:34:45.940189 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-v9cpg" podStartSLOduration=20.940173004000002 podStartE2EDuration="20.940173004s" podCreationTimestamp="2026-01-29 11:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:34:26.828404947 +0000 UTC m=+803.940797968" watchObservedRunningTime="2026-01-29 11:34:45.940173004 +0000 UTC m=+823.052566015" Jan 29 11:34:45 crc kubenswrapper[4766]: I0129 11:34:45.971736 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6xbql"] Jan 29 11:34:46 crc kubenswrapper[4766]: I0129 11:34:46.362210 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:34:46 crc kubenswrapper[4766]: I0129 11:34:46.362606 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:35:11 crc kubenswrapper[4766]: I0129 11:35:11.011223 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" podUID="bf694c5f-16c8-4b89-9b66-976601ada400" containerName="registry" containerID="cri-o://598200f718536bddff51f4c12b6bcc642d29918b8b50b730793d828e4327eee1" gracePeriod=30 Jan 29 11:35:11 crc kubenswrapper[4766]: I0129 11:35:11.335593 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:35:11 crc kubenswrapper[4766]: I0129 11:35:11.440081 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"bf694c5f-16c8-4b89-9b66-976601ada400\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " Jan 29 11:35:11 crc kubenswrapper[4766]: I0129 11:35:11.440167 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdpf9\" (UniqueName: \"kubernetes.io/projected/bf694c5f-16c8-4b89-9b66-976601ada400-kube-api-access-gdpf9\") pod \"bf694c5f-16c8-4b89-9b66-976601ada400\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " Jan 29 11:35:11 crc kubenswrapper[4766]: I0129 11:35:11.440199 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf694c5f-16c8-4b89-9b66-976601ada400-bound-sa-token\") pod \"bf694c5f-16c8-4b89-9b66-976601ada400\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " Jan 29 11:35:11 crc kubenswrapper[4766]: I0129 11:35:11.440250 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bf694c5f-16c8-4b89-9b66-976601ada400-registry-certificates\") pod \"bf694c5f-16c8-4b89-9b66-976601ada400\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " Jan 29 11:35:11 crc kubenswrapper[4766]: I0129 11:35:11.440281 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bf694c5f-16c8-4b89-9b66-976601ada400-ca-trust-extracted\") pod \"bf694c5f-16c8-4b89-9b66-976601ada400\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " Jan 29 11:35:11 crc kubenswrapper[4766]: I0129 11:35:11.440317 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf694c5f-16c8-4b89-9b66-976601ada400-trusted-ca\") pod \"bf694c5f-16c8-4b89-9b66-976601ada400\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " Jan 29 11:35:11 crc kubenswrapper[4766]: I0129 11:35:11.440348 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bf694c5f-16c8-4b89-9b66-976601ada400-registry-tls\") pod \"bf694c5f-16c8-4b89-9b66-976601ada400\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " Jan 29 11:35:11 crc kubenswrapper[4766]: I0129 11:35:11.440375 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bf694c5f-16c8-4b89-9b66-976601ada400-installation-pull-secrets\") pod \"bf694c5f-16c8-4b89-9b66-976601ada400\" (UID: \"bf694c5f-16c8-4b89-9b66-976601ada400\") " Jan 29 11:35:11 crc kubenswrapper[4766]: I0129 11:35:11.441500 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf694c5f-16c8-4b89-9b66-976601ada400-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf694c5f-16c8-4b89-9b66-976601ada400" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:35:11 crc kubenswrapper[4766]: I0129 11:35:11.442026 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf694c5f-16c8-4b89-9b66-976601ada400-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "bf694c5f-16c8-4b89-9b66-976601ada400" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:35:11 crc kubenswrapper[4766]: I0129 11:35:11.446348 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf694c5f-16c8-4b89-9b66-976601ada400-kube-api-access-gdpf9" (OuterVolumeSpecName: "kube-api-access-gdpf9") pod "bf694c5f-16c8-4b89-9b66-976601ada400" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400"). InnerVolumeSpecName "kube-api-access-gdpf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:35:11 crc kubenswrapper[4766]: I0129 11:35:11.446594 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf694c5f-16c8-4b89-9b66-976601ada400-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "bf694c5f-16c8-4b89-9b66-976601ada400" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:35:11 crc kubenswrapper[4766]: I0129 11:35:11.446664 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf694c5f-16c8-4b89-9b66-976601ada400-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "bf694c5f-16c8-4b89-9b66-976601ada400" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:35:11 crc kubenswrapper[4766]: I0129 11:35:11.446782 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf694c5f-16c8-4b89-9b66-976601ada400-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf694c5f-16c8-4b89-9b66-976601ada400" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:35:11 crc kubenswrapper[4766]: I0129 11:35:11.449574 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "bf694c5f-16c8-4b89-9b66-976601ada400" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 11:35:11 crc kubenswrapper[4766]: I0129 11:35:11.457630 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf694c5f-16c8-4b89-9b66-976601ada400-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "bf694c5f-16c8-4b89-9b66-976601ada400" (UID: "bf694c5f-16c8-4b89-9b66-976601ada400"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:35:11 crc kubenswrapper[4766]: I0129 11:35:11.542840 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdpf9\" (UniqueName: \"kubernetes.io/projected/bf694c5f-16c8-4b89-9b66-976601ada400-kube-api-access-gdpf9\") on node \"crc\" DevicePath \"\"" Jan 29 11:35:11 crc kubenswrapper[4766]: I0129 11:35:11.542870 4766 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf694c5f-16c8-4b89-9b66-976601ada400-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 11:35:11 crc kubenswrapper[4766]: I0129 11:35:11.542879 4766 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bf694c5f-16c8-4b89-9b66-976601ada400-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 11:35:11 crc kubenswrapper[4766]: I0129 11:35:11.542887 4766 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bf694c5f-16c8-4b89-9b66-976601ada400-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 11:35:11 crc kubenswrapper[4766]: I0129 11:35:11.542897 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf694c5f-16c8-4b89-9b66-976601ada400-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:35:11 crc kubenswrapper[4766]: I0129 11:35:11.542905 4766 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bf694c5f-16c8-4b89-9b66-976601ada400-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 11:35:11 crc kubenswrapper[4766]: I0129 11:35:11.542913 4766 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bf694c5f-16c8-4b89-9b66-976601ada400-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 11:35:12 crc kubenswrapper[4766]: I0129 11:35:12.023300 4766 generic.go:334] "Generic (PLEG): container finished" podID="bf694c5f-16c8-4b89-9b66-976601ada400" containerID="598200f718536bddff51f4c12b6bcc642d29918b8b50b730793d828e4327eee1" exitCode=0 Jan 29 11:35:12 crc kubenswrapper[4766]: I0129 11:35:12.023356 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" Jan 29 11:35:12 crc kubenswrapper[4766]: I0129 11:35:12.023370 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" event={"ID":"bf694c5f-16c8-4b89-9b66-976601ada400","Type":"ContainerDied","Data":"598200f718536bddff51f4c12b6bcc642d29918b8b50b730793d828e4327eee1"} Jan 29 11:35:12 crc kubenswrapper[4766]: I0129 11:35:12.023714 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" event={"ID":"bf694c5f-16c8-4b89-9b66-976601ada400","Type":"ContainerDied","Data":"f959a8e93b87215b4953d2b0c086ba7a58474e31a38903ccb9961ece033a7fe0"} Jan 29 11:35:12 crc kubenswrapper[4766]: I0129 11:35:12.023739 4766 scope.go:117] "RemoveContainer" containerID="598200f718536bddff51f4c12b6bcc642d29918b8b50b730793d828e4327eee1" Jan 29 11:35:12 crc kubenswrapper[4766]: I0129 11:35:12.053497 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6xbql"] Jan 29 11:35:12 crc kubenswrapper[4766]: I0129 11:35:12.053495 4766 scope.go:117] "RemoveContainer" containerID="598200f718536bddff51f4c12b6bcc642d29918b8b50b730793d828e4327eee1" Jan 29 11:35:12 crc kubenswrapper[4766]: E0129 11:35:12.054129 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"598200f718536bddff51f4c12b6bcc642d29918b8b50b730793d828e4327eee1\": container with ID starting with 598200f718536bddff51f4c12b6bcc642d29918b8b50b730793d828e4327eee1 not found: ID does not exist" containerID="598200f718536bddff51f4c12b6bcc642d29918b8b50b730793d828e4327eee1" Jan 29 11:35:12 crc kubenswrapper[4766]: I0129 11:35:12.054170 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"598200f718536bddff51f4c12b6bcc642d29918b8b50b730793d828e4327eee1"} err="failed to get container status \"598200f718536bddff51f4c12b6bcc642d29918b8b50b730793d828e4327eee1\": rpc error: code = NotFound desc = could not find container \"598200f718536bddff51f4c12b6bcc642d29918b8b50b730793d828e4327eee1\": container with ID starting with 598200f718536bddff51f4c12b6bcc642d29918b8b50b730793d828e4327eee1 not found: ID does not exist" Jan 29 11:35:12 crc kubenswrapper[4766]: I0129 11:35:12.059966 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6xbql"] Jan 29 11:35:13 crc kubenswrapper[4766]: I0129 11:35:13.234134 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf694c5f-16c8-4b89-9b66-976601ada400" path="/var/lib/kubelet/pods/bf694c5f-16c8-4b89-9b66-976601ada400/volumes" Jan 29 11:35:16 crc kubenswrapper[4766]: I0129 11:35:16.288140 4766 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-6xbql container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.33:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 11:35:16 crc kubenswrapper[4766]: I0129 11:35:16.288529 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-6xbql" podUID="bf694c5f-16c8-4b89-9b66-976601ada400" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.33:5000/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Jan 29 11:35:16 crc kubenswrapper[4766]: I0129 11:35:16.362021 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:35:16 crc kubenswrapper[4766]: I0129 11:35:16.362105 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:35:46 crc kubenswrapper[4766]: I0129 11:35:46.361784 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:35:46 crc kubenswrapper[4766]: I0129 11:35:46.362336 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:35:46 crc kubenswrapper[4766]: I0129 11:35:46.362385 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" Jan 29 11:35:46 crc kubenswrapper[4766]: I0129 11:35:46.362937 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7fa4d042a6c05c408b2d3dedbac93c3fc30503468a0d3531b823b069420802bd"} pod="openshift-machine-config-operator/machine-config-daemon-npgg8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:35:46 crc kubenswrapper[4766]: I0129 11:35:46.362997 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" containerID="cri-o://7fa4d042a6c05c408b2d3dedbac93c3fc30503468a0d3531b823b069420802bd" gracePeriod=600 Jan 29 11:35:47 crc kubenswrapper[4766]: I0129 11:35:47.215392 4766 generic.go:334] "Generic (PLEG): container finished" podID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerID="7fa4d042a6c05c408b2d3dedbac93c3fc30503468a0d3531b823b069420802bd" exitCode=0 Jan 29 11:35:47 crc kubenswrapper[4766]: I0129 11:35:47.215443 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" event={"ID":"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a","Type":"ContainerDied","Data":"7fa4d042a6c05c408b2d3dedbac93c3fc30503468a0d3531b823b069420802bd"} Jan 29 11:35:47 crc kubenswrapper[4766]: I0129 11:35:47.215759 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" event={"ID":"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a","Type":"ContainerStarted","Data":"6e6ca83c79b07ee253c2ead25709cdf0f2689e63dd55c9ddea37747adac17fa8"} Jan 29 11:35:47 crc kubenswrapper[4766]: I0129 
11:35:47.215778 4766 scope.go:117] "RemoveContainer" containerID="289b46d81663eab98ebc9c1c1ff871931cb149c2d0ce77c14017931a9f7bb210" Jan 29 11:38:16 crc kubenswrapper[4766]: I0129 11:38:16.361680 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:38:16 crc kubenswrapper[4766]: I0129 11:38:16.362277 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:38:42 crc kubenswrapper[4766]: I0129 11:38:42.643767 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rt6pn"] Jan 29 11:38:42 crc kubenswrapper[4766]: E0129 11:38:42.644587 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf694c5f-16c8-4b89-9b66-976601ada400" containerName="registry" Jan 29 11:38:42 crc kubenswrapper[4766]: I0129 11:38:42.644607 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf694c5f-16c8-4b89-9b66-976601ada400" containerName="registry" Jan 29 11:38:42 crc kubenswrapper[4766]: I0129 11:38:42.644784 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf694c5f-16c8-4b89-9b66-976601ada400" containerName="registry" Jan 29 11:38:42 crc kubenswrapper[4766]: I0129 11:38:42.645838 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rt6pn" Jan 29 11:38:42 crc kubenswrapper[4766]: I0129 11:38:42.653850 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rt6pn"] Jan 29 11:38:42 crc kubenswrapper[4766]: I0129 11:38:42.834806 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfb2cb8f-73b5-469f-a9b7-e21468138ac3-utilities\") pod \"redhat-operators-rt6pn\" (UID: \"cfb2cb8f-73b5-469f-a9b7-e21468138ac3\") " pod="openshift-marketplace/redhat-operators-rt6pn" Jan 29 11:38:42 crc kubenswrapper[4766]: I0129 11:38:42.834965 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6scm\" (UniqueName: \"kubernetes.io/projected/cfb2cb8f-73b5-469f-a9b7-e21468138ac3-kube-api-access-f6scm\") pod \"redhat-operators-rt6pn\" (UID: \"cfb2cb8f-73b5-469f-a9b7-e21468138ac3\") " pod="openshift-marketplace/redhat-operators-rt6pn" Jan 29 11:38:42 crc kubenswrapper[4766]: I0129 11:38:42.835081 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfb2cb8f-73b5-469f-a9b7-e21468138ac3-catalog-content\") pod \"redhat-operators-rt6pn\" (UID: \"cfb2cb8f-73b5-469f-a9b7-e21468138ac3\") " pod="openshift-marketplace/redhat-operators-rt6pn" Jan 29 11:38:42 crc kubenswrapper[4766]: I0129 11:38:42.935913 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6scm\" (UniqueName: \"kubernetes.io/projected/cfb2cb8f-73b5-469f-a9b7-e21468138ac3-kube-api-access-f6scm\") pod \"redhat-operators-rt6pn\" (UID: 
\"cfb2cb8f-73b5-469f-a9b7-e21468138ac3\") " pod="openshift-marketplace/redhat-operators-rt6pn" Jan 29 11:38:42 crc kubenswrapper[4766]: I0129 11:38:42.936754 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfb2cb8f-73b5-469f-a9b7-e21468138ac3-catalog-content\") pod \"redhat-operators-rt6pn\" (UID: \"cfb2cb8f-73b5-469f-a9b7-e21468138ac3\") " pod="openshift-marketplace/redhat-operators-rt6pn" Jan 29 11:38:42 crc kubenswrapper[4766]: I0129 11:38:42.936986 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfb2cb8f-73b5-469f-a9b7-e21468138ac3-utilities\") pod \"redhat-operators-rt6pn\" (UID: \"cfb2cb8f-73b5-469f-a9b7-e21468138ac3\") " pod="openshift-marketplace/redhat-operators-rt6pn" Jan 29 11:38:42 crc kubenswrapper[4766]: I0129 11:38:42.937472 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfb2cb8f-73b5-469f-a9b7-e21468138ac3-catalog-content\") pod \"redhat-operators-rt6pn\" (UID: \"cfb2cb8f-73b5-469f-a9b7-e21468138ac3\") " pod="openshift-marketplace/redhat-operators-rt6pn" Jan 29 11:38:42 crc kubenswrapper[4766]: I0129 11:38:42.937515 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfb2cb8f-73b5-469f-a9b7-e21468138ac3-utilities\") pod \"redhat-operators-rt6pn\" (UID: \"cfb2cb8f-73b5-469f-a9b7-e21468138ac3\") " pod="openshift-marketplace/redhat-operators-rt6pn" Jan 29 11:38:42 crc kubenswrapper[4766]: I0129 11:38:42.956738 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6scm\" (UniqueName: \"kubernetes.io/projected/cfb2cb8f-73b5-469f-a9b7-e21468138ac3-kube-api-access-f6scm\") pod \"redhat-operators-rt6pn\" (UID: \"cfb2cb8f-73b5-469f-a9b7-e21468138ac3\") " pod="openshift-marketplace/redhat-operators-rt6pn" Jan 29 11:38:42 crc kubenswrapper[4766]: I0129 11:38:42.971731 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rt6pn" Jan 29 11:38:43 crc kubenswrapper[4766]: I0129 11:38:43.366381 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rt6pn"] Jan 29 11:38:44 crc kubenswrapper[4766]: I0129 11:38:44.142636 4766 generic.go:334] "Generic (PLEG): container finished" podID="cfb2cb8f-73b5-469f-a9b7-e21468138ac3" containerID="f220c2f7b2acd8c8ac8521dfcbb2dcb5bfe7d90b2b3eb12f7c509f1777179b47" exitCode=0 Jan 29 11:38:44 crc kubenswrapper[4766]: I0129 11:38:44.142733 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rt6pn" event={"ID":"cfb2cb8f-73b5-469f-a9b7-e21468138ac3","Type":"ContainerDied","Data":"f220c2f7b2acd8c8ac8521dfcbb2dcb5bfe7d90b2b3eb12f7c509f1777179b47"} Jan 29 11:38:44 crc kubenswrapper[4766]: I0129 11:38:44.142937 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rt6pn" event={"ID":"cfb2cb8f-73b5-469f-a9b7-e21468138ac3","Type":"ContainerStarted","Data":"444e6dbe3917a066a8ab644f3f7bb29bcca2af87813a697fd7139b28191581e9"} Jan 29 11:38:44 crc kubenswrapper[4766]: I0129 11:38:44.145686 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 11:38:45 crc kubenswrapper[4766]: I0129 11:38:45.149341 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rt6pn" event={"ID":"cfb2cb8f-73b5-469f-a9b7-e21468138ac3","Type":"ContainerStarted","Data":"ce32425506381382a856e7da9029202e6b75238b566727b513fb4a57eac8e7f4"} Jan 29 11:38:46 crc kubenswrapper[4766]: I0129 11:38:46.156245 4766 generic.go:334] "Generic (PLEG): container finished" podID="cfb2cb8f-73b5-469f-a9b7-e21468138ac3" containerID="ce32425506381382a856e7da9029202e6b75238b566727b513fb4a57eac8e7f4" exitCode=0 Jan 29 11:38:46 crc kubenswrapper[4766]: I0129 11:38:46.156328 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rt6pn" event={"ID":"cfb2cb8f-73b5-469f-a9b7-e21468138ac3","Type":"ContainerDied","Data":"ce32425506381382a856e7da9029202e6b75238b566727b513fb4a57eac8e7f4"} Jan 29 11:38:46 crc kubenswrapper[4766]: I0129 11:38:46.362148 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:38:46 crc kubenswrapper[4766]: I0129 11:38:46.362209 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:38:48 crc kubenswrapper[4766]: I0129 11:38:48.175700 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rt6pn" event={"ID":"cfb2cb8f-73b5-469f-a9b7-e21468138ac3","Type":"ContainerStarted","Data":"61ee2ae41748f41ff455ff12169cc6bdc3e1a3d61b4fbc61a777baafe3adf816"} Jan 29 11:38:48 crc kubenswrapper[4766]: I0129 11:38:48.198140 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rt6pn" podStartSLOduration=3.077010493 podStartE2EDuration="6.198113121s" podCreationTimestamp="2026-01-29 
11:38:42 +0000 UTC" firstStartedPulling="2026-01-29 11:38:44.145432353 +0000 UTC m=+1061.257825364" lastFinishedPulling="2026-01-29 11:38:47.266534981 +0000 UTC m=+1064.378927992" observedRunningTime="2026-01-29 11:38:48.191964975 +0000 UTC m=+1065.304358006" watchObservedRunningTime="2026-01-29 11:38:48.198113121 +0000 UTC m=+1065.310506132" Jan 29 11:38:52 crc kubenswrapper[4766]: I0129 11:38:52.973749 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rt6pn" Jan 29 11:38:52 crc kubenswrapper[4766]: I0129 11:38:52.974147 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rt6pn" Jan 29 11:38:54 crc kubenswrapper[4766]: I0129 11:38:54.016089 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rt6pn" podUID="cfb2cb8f-73b5-469f-a9b7-e21468138ac3" containerName="registry-server" probeResult="failure" output=< Jan 29 11:38:54 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s Jan 29 11:38:54 crc kubenswrapper[4766]: > Jan 29 11:39:01 crc kubenswrapper[4766]: I0129 11:39:01.896990 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zqv4f"] Jan 29 11:39:01 crc kubenswrapper[4766]: I0129 11:39:01.898899 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zqv4f" Jan 29 11:39:01 crc kubenswrapper[4766]: I0129 11:39:01.908506 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zqv4f"] Jan 29 11:39:02 crc kubenswrapper[4766]: I0129 11:39:02.078242 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sln9x\" (UniqueName: \"kubernetes.io/projected/3d65787f-1649-4fde-a97e-7efb8a7ab17f-kube-api-access-sln9x\") pod \"certified-operators-zqv4f\" (UID: \"3d65787f-1649-4fde-a97e-7efb8a7ab17f\") " pod="openshift-marketplace/certified-operators-zqv4f" Jan 29 11:39:02 crc kubenswrapper[4766]: I0129 11:39:02.078295 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d65787f-1649-4fde-a97e-7efb8a7ab17f-utilities\") pod \"certified-operators-zqv4f\" (UID: \"3d65787f-1649-4fde-a97e-7efb8a7ab17f\") " pod="openshift-marketplace/certified-operators-zqv4f" Jan 29 11:39:02 crc kubenswrapper[4766]: I0129 11:39:02.078322 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d65787f-1649-4fde-a97e-7efb8a7ab17f-catalog-content\") pod \"certified-operators-zqv4f\" (UID: \"3d65787f-1649-4fde-a97e-7efb8a7ab17f\") " pod="openshift-marketplace/certified-operators-zqv4f" Jan 29 11:39:02 crc kubenswrapper[4766]: I0129 11:39:02.179213 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d65787f-1649-4fde-a97e-7efb8a7ab17f-utilities\") pod \"certified-operators-zqv4f\" (UID: \"3d65787f-1649-4fde-a97e-7efb8a7ab17f\") " pod="openshift-marketplace/certified-operators-zqv4f" Jan 29 11:39:02 crc kubenswrapper[4766]: I0129 11:39:02.179274 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3d65787f-1649-4fde-a97e-7efb8a7ab17f-catalog-content\") pod \"certified-operators-zqv4f\" (UID: \"3d65787f-1649-4fde-a97e-7efb8a7ab17f\") " pod="openshift-marketplace/certified-operators-zqv4f" Jan 29 11:39:02 crc kubenswrapper[4766]: I0129 11:39:02.179332 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sln9x\" (UniqueName: \"kubernetes.io/projected/3d65787f-1649-4fde-a97e-7efb8a7ab17f-kube-api-access-sln9x\") pod \"certified-operators-zqv4f\" (UID: \"3d65787f-1649-4fde-a97e-7efb8a7ab17f\") " pod="openshift-marketplace/certified-operators-zqv4f" Jan 29 11:39:02 crc kubenswrapper[4766]: I0129 11:39:02.180189 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d65787f-1649-4fde-a97e-7efb8a7ab17f-catalog-content\") pod \"certified-operators-zqv4f\" (UID: \"3d65787f-1649-4fde-a97e-7efb8a7ab17f\") " pod="openshift-marketplace/certified-operators-zqv4f" Jan 29 11:39:02 crc kubenswrapper[4766]: I0129 11:39:02.180232 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d65787f-1649-4fde-a97e-7efb8a7ab17f-utilities\") pod \"certified-operators-zqv4f\" (UID: \"3d65787f-1649-4fde-a97e-7efb8a7ab17f\") " pod="openshift-marketplace/certified-operators-zqv4f" Jan 29 11:39:02 crc kubenswrapper[4766]: I0129 11:39:02.198245 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sln9x\" (UniqueName: \"kubernetes.io/projected/3d65787f-1649-4fde-a97e-7efb8a7ab17f-kube-api-access-sln9x\") pod \"certified-operators-zqv4f\" (UID: \"3d65787f-1649-4fde-a97e-7efb8a7ab17f\") " pod="openshift-marketplace/certified-operators-zqv4f" Jan 29 11:39:02 crc kubenswrapper[4766]: I0129 11:39:02.254545 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zqv4f" Jan 29 11:39:02 crc kubenswrapper[4766]: I0129 11:39:02.509674 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zqv4f"] Jan 29 11:39:02 crc kubenswrapper[4766]: W0129 11:39:02.520640 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d65787f_1649_4fde_a97e_7efb8a7ab17f.slice/crio-b1898c81d7bc7c23047d75e2b78bbb06e1f60705d56101ef0a92287443138207 WatchSource:0}: Error finding container b1898c81d7bc7c23047d75e2b78bbb06e1f60705d56101ef0a92287443138207: Status 404 returned error can't find the container with id b1898c81d7bc7c23047d75e2b78bbb06e1f60705d56101ef0a92287443138207 Jan 29 11:39:03 crc kubenswrapper[4766]: I0129 11:39:03.008882 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rt6pn" Jan 29 11:39:03 crc kubenswrapper[4766]: I0129 11:39:03.048763 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rt6pn" Jan 29 11:39:03 crc kubenswrapper[4766]: I0129 11:39:03.249640 4766 generic.go:334] "Generic (PLEG): container finished" podID="3d65787f-1649-4fde-a97e-7efb8a7ab17f" containerID="04ecaffa78e15cde6e0b8d67c3b7a9d8c6f500f90d0dd647001f3dbfd7a2b45b" exitCode=0 Jan 29 11:39:03 crc kubenswrapper[4766]: I0129 11:39:03.249701 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqv4f" event={"ID":"3d65787f-1649-4fde-a97e-7efb8a7ab17f","Type":"ContainerDied","Data":"04ecaffa78e15cde6e0b8d67c3b7a9d8c6f500f90d0dd647001f3dbfd7a2b45b"} Jan 29 11:39:03 crc kubenswrapper[4766]: I0129 11:39:03.249758 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqv4f" event={"ID":"3d65787f-1649-4fde-a97e-7efb8a7ab17f","Type":"ContainerStarted","Data":"b1898c81d7bc7c23047d75e2b78bbb06e1f60705d56101ef0a92287443138207"} Jan 29 11:39:04 crc kubenswrapper[4766]: I0129 11:39:04.258706 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqv4f" event={"ID":"3d65787f-1649-4fde-a97e-7efb8a7ab17f","Type":"ContainerStarted","Data":"4e2ca3211bc536fb594b6f0bcb5a76c9463e364c9a14168602039e2c2f8c1f19"} Jan 29 11:39:05 crc kubenswrapper[4766]: I0129 11:39:05.265518 4766 generic.go:334] "Generic (PLEG): container finished" podID="3d65787f-1649-4fde-a97e-7efb8a7ab17f" containerID="4e2ca3211bc536fb594b6f0bcb5a76c9463e364c9a14168602039e2c2f8c1f19" exitCode=0 Jan 29 11:39:05 crc kubenswrapper[4766]: I0129 11:39:05.265576 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqv4f" event={"ID":"3d65787f-1649-4fde-a97e-7efb8a7ab17f","Type":"ContainerDied","Data":"4e2ca3211bc536fb594b6f0bcb5a76c9463e364c9a14168602039e2c2f8c1f19"} Jan 29 11:39:05 crc kubenswrapper[4766]: I0129 11:39:05.274507 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rt6pn"] Jan 29 11:39:05 crc kubenswrapper[4766]: I0129 11:39:05.274726 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rt6pn" podUID="cfb2cb8f-73b5-469f-a9b7-e21468138ac3" containerName="registry-server" containerID="cri-o://61ee2ae41748f41ff455ff12169cc6bdc3e1a3d61b4fbc61a777baafe3adf816" gracePeriod=2 Jan 29 11:39:05 crc kubenswrapper[4766]: I0129 
11:39:05.644869 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rt6pn" Jan 29 11:39:05 crc kubenswrapper[4766]: I0129 11:39:05.826094 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfb2cb8f-73b5-469f-a9b7-e21468138ac3-catalog-content\") pod \"cfb2cb8f-73b5-469f-a9b7-e21468138ac3\" (UID: \"cfb2cb8f-73b5-469f-a9b7-e21468138ac3\") " Jan 29 11:39:05 crc kubenswrapper[4766]: I0129 11:39:05.826368 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfb2cb8f-73b5-469f-a9b7-e21468138ac3-utilities\") pod \"cfb2cb8f-73b5-469f-a9b7-e21468138ac3\" (UID: \"cfb2cb8f-73b5-469f-a9b7-e21468138ac3\") " Jan 29 11:39:05 crc kubenswrapper[4766]: I0129 11:39:05.826428 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6scm\" (UniqueName: \"kubernetes.io/projected/cfb2cb8f-73b5-469f-a9b7-e21468138ac3-kube-api-access-f6scm\") pod \"cfb2cb8f-73b5-469f-a9b7-e21468138ac3\" (UID: \"cfb2cb8f-73b5-469f-a9b7-e21468138ac3\") " Jan 29 11:39:05 crc kubenswrapper[4766]: I0129 11:39:05.827311 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfb2cb8f-73b5-469f-a9b7-e21468138ac3-utilities" (OuterVolumeSpecName: "utilities") pod "cfb2cb8f-73b5-469f-a9b7-e21468138ac3" (UID: "cfb2cb8f-73b5-469f-a9b7-e21468138ac3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:39:05 crc kubenswrapper[4766]: I0129 11:39:05.833822 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfb2cb8f-73b5-469f-a9b7-e21468138ac3-kube-api-access-f6scm" (OuterVolumeSpecName: "kube-api-access-f6scm") pod "cfb2cb8f-73b5-469f-a9b7-e21468138ac3" (UID: "cfb2cb8f-73b5-469f-a9b7-e21468138ac3"). InnerVolumeSpecName "kube-api-access-f6scm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:39:05 crc kubenswrapper[4766]: I0129 11:39:05.928326 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfb2cb8f-73b5-469f-a9b7-e21468138ac3-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:05 crc kubenswrapper[4766]: I0129 11:39:05.928375 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6scm\" (UniqueName: \"kubernetes.io/projected/cfb2cb8f-73b5-469f-a9b7-e21468138ac3-kube-api-access-f6scm\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:05 crc kubenswrapper[4766]: I0129 11:39:05.950148 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfb2cb8f-73b5-469f-a9b7-e21468138ac3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cfb2cb8f-73b5-469f-a9b7-e21468138ac3" (UID: "cfb2cb8f-73b5-469f-a9b7-e21468138ac3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:39:06 crc kubenswrapper[4766]: I0129 11:39:06.029849 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfb2cb8f-73b5-469f-a9b7-e21468138ac3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:06 crc kubenswrapper[4766]: I0129 11:39:06.282807 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqv4f" event={"ID":"3d65787f-1649-4fde-a97e-7efb8a7ab17f","Type":"ContainerStarted","Data":"dce16ad786383773aec82eea9e06b9ae607548843229ea0a4eac6910e978da5e"} Jan 29 11:39:06 crc kubenswrapper[4766]: I0129 11:39:06.293011 4766 generic.go:334] "Generic (PLEG): container finished" podID="cfb2cb8f-73b5-469f-a9b7-e21468138ac3" containerID="61ee2ae41748f41ff455ff12169cc6bdc3e1a3d61b4fbc61a777baafe3adf816" exitCode=0 Jan 29 11:39:06 crc kubenswrapper[4766]: I0129 11:39:06.293063 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rt6pn" event={"ID":"cfb2cb8f-73b5-469f-a9b7-e21468138ac3","Type":"ContainerDied","Data":"61ee2ae41748f41ff455ff12169cc6bdc3e1a3d61b4fbc61a777baafe3adf816"} Jan 29 11:39:06 crc kubenswrapper[4766]: I0129 11:39:06.293092 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rt6pn" event={"ID":"cfb2cb8f-73b5-469f-a9b7-e21468138ac3","Type":"ContainerDied","Data":"444e6dbe3917a066a8ab644f3f7bb29bcca2af87813a697fd7139b28191581e9"} Jan 29 11:39:06 crc kubenswrapper[4766]: I0129 11:39:06.293113 4766 scope.go:117] "RemoveContainer" containerID="61ee2ae41748f41ff455ff12169cc6bdc3e1a3d61b4fbc61a777baafe3adf816" Jan 29 11:39:06 crc kubenswrapper[4766]: I0129 11:39:06.293134 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rt6pn" Jan 29 11:39:06 crc kubenswrapper[4766]: I0129 11:39:06.313621 4766 scope.go:117] "RemoveContainer" containerID="ce32425506381382a856e7da9029202e6b75238b566727b513fb4a57eac8e7f4" Jan 29 11:39:06 crc kubenswrapper[4766]: I0129 11:39:06.322274 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zqv4f" podStartSLOduration=2.8247826050000002 podStartE2EDuration="5.322256056s" podCreationTimestamp="2026-01-29 11:39:01 +0000 UTC" firstStartedPulling="2026-01-29 11:39:03.251489578 +0000 UTC m=+1080.363882589" lastFinishedPulling="2026-01-29 11:39:05.748963029 +0000 UTC m=+1082.861356040" observedRunningTime="2026-01-29 11:39:06.318986913 +0000 UTC m=+1083.431379924" watchObservedRunningTime="2026-01-29 11:39:06.322256056 +0000 UTC m=+1083.434649067" Jan 29 11:39:06 crc kubenswrapper[4766]: I0129 11:39:06.345050 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rt6pn"] Jan 29 11:39:06 crc kubenswrapper[4766]: I0129 11:39:06.347396 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rt6pn"] Jan 29 11:39:06 crc kubenswrapper[4766]: I0129 11:39:06.355322 4766 scope.go:117] "RemoveContainer" containerID="f220c2f7b2acd8c8ac8521dfcbb2dcb5bfe7d90b2b3eb12f7c509f1777179b47" Jan 29 11:39:06 crc kubenswrapper[4766]: I0129 11:39:06.375765 4766 scope.go:117] "RemoveContainer" containerID="61ee2ae41748f41ff455ff12169cc6bdc3e1a3d61b4fbc61a777baafe3adf816" Jan 29 11:39:06 crc kubenswrapper[4766]: E0129 11:39:06.375983 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61ee2ae41748f41ff455ff12169cc6bdc3e1a3d61b4fbc61a777baafe3adf816\": container with ID starting with 61ee2ae41748f41ff455ff12169cc6bdc3e1a3d61b4fbc61a777baafe3adf816 not found: ID does not exist" containerID="61ee2ae41748f41ff455ff12169cc6bdc3e1a3d61b4fbc61a777baafe3adf816" Jan 29 11:39:06 crc kubenswrapper[4766]: I0129 11:39:06.376014 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61ee2ae41748f41ff455ff12169cc6bdc3e1a3d61b4fbc61a777baafe3adf816"} err="failed to get container status \"61ee2ae41748f41ff455ff12169cc6bdc3e1a3d61b4fbc61a777baafe3adf816\": rpc error: code = NotFound desc = could not find container \"61ee2ae41748f41ff455ff12169cc6bdc3e1a3d61b4fbc61a777baafe3adf816\": container with ID starting with 61ee2ae41748f41ff455ff12169cc6bdc3e1a3d61b4fbc61a777baafe3adf816 not found: ID does not exist" Jan 29 11:39:06 crc kubenswrapper[4766]: I0129 11:39:06.376037 4766 scope.go:117] "RemoveContainer" containerID="ce32425506381382a856e7da9029202e6b75238b566727b513fb4a57eac8e7f4" Jan 29 11:39:06 crc kubenswrapper[4766]: E0129 11:39:06.379627 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce32425506381382a856e7da9029202e6b75238b566727b513fb4a57eac8e7f4\": container with ID starting with ce32425506381382a856e7da9029202e6b75238b566727b513fb4a57eac8e7f4 not found: ID does not exist" containerID="ce32425506381382a856e7da9029202e6b75238b566727b513fb4a57eac8e7f4" Jan 29 11:39:06 crc kubenswrapper[4766]: I0129 11:39:06.379676 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce32425506381382a856e7da9029202e6b75238b566727b513fb4a57eac8e7f4"} err="failed to get 
container status \"ce32425506381382a856e7da9029202e6b75238b566727b513fb4a57eac8e7f4\": rpc error: code = NotFound desc = could not find container \"ce32425506381382a856e7da9029202e6b75238b566727b513fb4a57eac8e7f4\": container with ID starting with ce32425506381382a856e7da9029202e6b75238b566727b513fb4a57eac8e7f4 not found: ID does not exist" Jan 29 11:39:06 crc kubenswrapper[4766]: I0129 11:39:06.379710 4766 scope.go:117] "RemoveContainer" containerID="f220c2f7b2acd8c8ac8521dfcbb2dcb5bfe7d90b2b3eb12f7c509f1777179b47" Jan 29 11:39:06 crc kubenswrapper[4766]: E0129 11:39:06.380867 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f220c2f7b2acd8c8ac8521dfcbb2dcb5bfe7d90b2b3eb12f7c509f1777179b47\": container with ID starting with f220c2f7b2acd8c8ac8521dfcbb2dcb5bfe7d90b2b3eb12f7c509f1777179b47 not found: ID does not exist" containerID="f220c2f7b2acd8c8ac8521dfcbb2dcb5bfe7d90b2b3eb12f7c509f1777179b47" Jan 29 11:39:06 crc kubenswrapper[4766]: I0129 11:39:06.380895 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f220c2f7b2acd8c8ac8521dfcbb2dcb5bfe7d90b2b3eb12f7c509f1777179b47"} err="failed to get container status \"f220c2f7b2acd8c8ac8521dfcbb2dcb5bfe7d90b2b3eb12f7c509f1777179b47\": rpc error: code = NotFound desc = could not find container \"f220c2f7b2acd8c8ac8521dfcbb2dcb5bfe7d90b2b3eb12f7c509f1777179b47\": container with ID starting with f220c2f7b2acd8c8ac8521dfcbb2dcb5bfe7d90b2b3eb12f7c509f1777179b47 not found: ID does not exist" Jan 29 11:39:07 crc kubenswrapper[4766]: I0129 11:39:07.231249 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfb2cb8f-73b5-469f-a9b7-e21468138ac3" path="/var/lib/kubelet/pods/cfb2cb8f-73b5-469f-a9b7-e21468138ac3/volumes" Jan 29 11:39:12 crc kubenswrapper[4766]: I0129 11:39:12.254757 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zqv4f" Jan 29 11:39:12 crc kubenswrapper[4766]: I0129 11:39:12.255294 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zqv4f" Jan 29 11:39:12 crc kubenswrapper[4766]: I0129 11:39:12.308697 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zqv4f" Jan 29 11:39:12 crc kubenswrapper[4766]: I0129 11:39:12.370723 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zqv4f" Jan 29 11:39:12 crc kubenswrapper[4766]: I0129 11:39:12.541304 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zqv4f"] Jan 29 11:39:14 crc kubenswrapper[4766]: I0129 11:39:14.338073 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zqv4f" podUID="3d65787f-1649-4fde-a97e-7efb8a7ab17f" containerName="registry-server" containerID="cri-o://dce16ad786383773aec82eea9e06b9ae607548843229ea0a4eac6910e978da5e" gracePeriod=2 Jan 29 11:39:14 crc kubenswrapper[4766]: I0129 11:39:14.694148 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zqv4f" Jan 29 11:39:14 crc kubenswrapper[4766]: I0129 11:39:14.836710 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d65787f-1649-4fde-a97e-7efb8a7ab17f-utilities\") pod \"3d65787f-1649-4fde-a97e-7efb8a7ab17f\" (UID: \"3d65787f-1649-4fde-a97e-7efb8a7ab17f\") " Jan 29 11:39:14 crc kubenswrapper[4766]: I0129 11:39:14.836775 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d65787f-1649-4fde-a97e-7efb8a7ab17f-catalog-content\") pod \"3d65787f-1649-4fde-a97e-7efb8a7ab17f\" (UID: \"3d65787f-1649-4fde-a97e-7efb8a7ab17f\") " Jan 29 11:39:14 crc kubenswrapper[4766]: I0129 11:39:14.836859 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sln9x\" (UniqueName: \"kubernetes.io/projected/3d65787f-1649-4fde-a97e-7efb8a7ab17f-kube-api-access-sln9x\") pod \"3d65787f-1649-4fde-a97e-7efb8a7ab17f\" (UID: \"3d65787f-1649-4fde-a97e-7efb8a7ab17f\") " Jan 29 11:39:14 crc kubenswrapper[4766]: I0129 11:39:14.837673 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d65787f-1649-4fde-a97e-7efb8a7ab17f-utilities" (OuterVolumeSpecName: "utilities") pod "3d65787f-1649-4fde-a97e-7efb8a7ab17f" (UID: "3d65787f-1649-4fde-a97e-7efb8a7ab17f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:39:14 crc kubenswrapper[4766]: I0129 11:39:14.841805 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d65787f-1649-4fde-a97e-7efb8a7ab17f-kube-api-access-sln9x" (OuterVolumeSpecName: "kube-api-access-sln9x") pod "3d65787f-1649-4fde-a97e-7efb8a7ab17f" (UID: "3d65787f-1649-4fde-a97e-7efb8a7ab17f"). InnerVolumeSpecName "kube-api-access-sln9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:39:14 crc kubenswrapper[4766]: I0129 11:39:14.938429 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d65787f-1649-4fde-a97e-7efb8a7ab17f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:14 crc kubenswrapper[4766]: I0129 11:39:14.938836 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sln9x\" (UniqueName: \"kubernetes.io/projected/3d65787f-1649-4fde-a97e-7efb8a7ab17f-kube-api-access-sln9x\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:15 crc kubenswrapper[4766]: I0129 11:39:15.347487 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zqv4f" Jan 29 11:39:15 crc kubenswrapper[4766]: I0129 11:39:15.347540 4766 generic.go:334] "Generic (PLEG): container finished" podID="3d65787f-1649-4fde-a97e-7efb8a7ab17f" containerID="dce16ad786383773aec82eea9e06b9ae607548843229ea0a4eac6910e978da5e" exitCode=0 Jan 29 11:39:15 crc kubenswrapper[4766]: I0129 11:39:15.347548 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqv4f" event={"ID":"3d65787f-1649-4fde-a97e-7efb8a7ab17f","Type":"ContainerDied","Data":"dce16ad786383773aec82eea9e06b9ae607548843229ea0a4eac6910e978da5e"} Jan 29 11:39:15 crc kubenswrapper[4766]: I0129 11:39:15.347625 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqv4f" event={"ID":"3d65787f-1649-4fde-a97e-7efb8a7ab17f","Type":"ContainerDied","Data":"b1898c81d7bc7c23047d75e2b78bbb06e1f60705d56101ef0a92287443138207"} Jan 29 11:39:15 crc kubenswrapper[4766]: I0129 11:39:15.347643 4766 scope.go:117] "RemoveContainer" containerID="dce16ad786383773aec82eea9e06b9ae607548843229ea0a4eac6910e978da5e" Jan 29 11:39:15 crc kubenswrapper[4766]: I0129 11:39:15.363069 4766 scope.go:117] "RemoveContainer" containerID="4e2ca3211bc536fb594b6f0bcb5a76c9463e364c9a14168602039e2c2f8c1f19" Jan 29 11:39:15 crc kubenswrapper[4766]: I0129 11:39:15.378263 4766 scope.go:117] "RemoveContainer" containerID="04ecaffa78e15cde6e0b8d67c3b7a9d8c6f500f90d0dd647001f3dbfd7a2b45b" Jan 29 11:39:15 crc kubenswrapper[4766]: I0129 11:39:15.414156 4766 scope.go:117] "RemoveContainer" containerID="dce16ad786383773aec82eea9e06b9ae607548843229ea0a4eac6910e978da5e" Jan 29 11:39:15 crc kubenswrapper[4766]: E0129 11:39:15.414616 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dce16ad786383773aec82eea9e06b9ae607548843229ea0a4eac6910e978da5e\": container with ID starting with dce16ad786383773aec82eea9e06b9ae607548843229ea0a4eac6910e978da5e not found: ID does not exist" containerID="dce16ad786383773aec82eea9e06b9ae607548843229ea0a4eac6910e978da5e" Jan 29 11:39:15 crc kubenswrapper[4766]: I0129 11:39:15.414651 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dce16ad786383773aec82eea9e06b9ae607548843229ea0a4eac6910e978da5e"} err="failed to get container status \"dce16ad786383773aec82eea9e06b9ae607548843229ea0a4eac6910e978da5e\": rpc error: code = NotFound desc = could not find container \"dce16ad786383773aec82eea9e06b9ae607548843229ea0a4eac6910e978da5e\": container with ID starting with dce16ad786383773aec82eea9e06b9ae607548843229ea0a4eac6910e978da5e not found: ID does not exist" Jan 29 11:39:15 crc kubenswrapper[4766]: I0129 11:39:15.414676 4766 scope.go:117] "RemoveContainer" containerID="4e2ca3211bc536fb594b6f0bcb5a76c9463e364c9a14168602039e2c2f8c1f19" Jan 29 11:39:15 crc kubenswrapper[4766]: E0129 11:39:15.415004 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e2ca3211bc536fb594b6f0bcb5a76c9463e364c9a14168602039e2c2f8c1f19\": container with ID starting with 4e2ca3211bc536fb594b6f0bcb5a76c9463e364c9a14168602039e2c2f8c1f19 not found: ID does not exist" containerID="4e2ca3211bc536fb594b6f0bcb5a76c9463e364c9a14168602039e2c2f8c1f19" Jan 29 11:39:15 crc kubenswrapper[4766]: I0129 11:39:15.415030 4766 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4e2ca3211bc536fb594b6f0bcb5a76c9463e364c9a14168602039e2c2f8c1f19"} err="failed to get container status \"4e2ca3211bc536fb594b6f0bcb5a76c9463e364c9a14168602039e2c2f8c1f19\": rpc error: code = NotFound desc = could not find container \"4e2ca3211bc536fb594b6f0bcb5a76c9463e364c9a14168602039e2c2f8c1f19\": container with ID starting with 4e2ca3211bc536fb594b6f0bcb5a76c9463e364c9a14168602039e2c2f8c1f19 not found: ID does not exist" Jan 29 11:39:15 crc kubenswrapper[4766]: I0129 11:39:15.415045 4766 scope.go:117] "RemoveContainer" containerID="04ecaffa78e15cde6e0b8d67c3b7a9d8c6f500f90d0dd647001f3dbfd7a2b45b" Jan 29 11:39:15 crc kubenswrapper[4766]: E0129 11:39:15.415321 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04ecaffa78e15cde6e0b8d67c3b7a9d8c6f500f90d0dd647001f3dbfd7a2b45b\": container with ID starting with 04ecaffa78e15cde6e0b8d67c3b7a9d8c6f500f90d0dd647001f3dbfd7a2b45b not found: ID does not exist" containerID="04ecaffa78e15cde6e0b8d67c3b7a9d8c6f500f90d0dd647001f3dbfd7a2b45b" Jan 29 11:39:15 crc kubenswrapper[4766]: I0129 11:39:15.415349 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04ecaffa78e15cde6e0b8d67c3b7a9d8c6f500f90d0dd647001f3dbfd7a2b45b"} err="failed to get container status \"04ecaffa78e15cde6e0b8d67c3b7a9d8c6f500f90d0dd647001f3dbfd7a2b45b\": rpc error: code = NotFound desc = could not find container \"04ecaffa78e15cde6e0b8d67c3b7a9d8c6f500f90d0dd647001f3dbfd7a2b45b\": container with ID starting with 04ecaffa78e15cde6e0b8d67c3b7a9d8c6f500f90d0dd647001f3dbfd7a2b45b not found: ID does not exist" Jan 29 11:39:16 crc kubenswrapper[4766]: I0129 11:39:16.203059 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d65787f-1649-4fde-a97e-7efb8a7ab17f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d65787f-1649-4fde-a97e-7efb8a7ab17f" (UID: "3d65787f-1649-4fde-a97e-7efb8a7ab17f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:39:16 crc kubenswrapper[4766]: I0129 11:39:16.254333 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d65787f-1649-4fde-a97e-7efb8a7ab17f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:16 crc kubenswrapper[4766]: I0129 11:39:16.299683 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zqv4f"] Jan 29 11:39:16 crc kubenswrapper[4766]: I0129 11:39:16.303081 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zqv4f"] Jan 29 11:39:16 crc kubenswrapper[4766]: I0129 11:39:16.362472 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:39:16 crc kubenswrapper[4766]: I0129 11:39:16.362524 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:39:16 crc kubenswrapper[4766]: I0129 11:39:16.362560 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" Jan 29 11:39:16 crc kubenswrapper[4766]: I0129 11:39:16.362974 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6e6ca83c79b07ee253c2ead25709cdf0f2689e63dd55c9ddea37747adac17fa8"} pod="openshift-machine-config-operator/machine-config-daemon-npgg8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:39:16 crc kubenswrapper[4766]: I0129 11:39:16.363018 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" containerID="cri-o://6e6ca83c79b07ee253c2ead25709cdf0f2689e63dd55c9ddea37747adac17fa8" gracePeriod=600 Jan 29 11:39:17 crc kubenswrapper[4766]: I0129 11:39:17.230798 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d65787f-1649-4fde-a97e-7efb8a7ab17f" path="/var/lib/kubelet/pods/3d65787f-1649-4fde-a97e-7efb8a7ab17f/volumes" Jan 29 11:39:17 crc kubenswrapper[4766]: I0129 11:39:17.363838 4766 generic.go:334] "Generic (PLEG): container finished" podID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerID="6e6ca83c79b07ee253c2ead25709cdf0f2689e63dd55c9ddea37747adac17fa8" exitCode=0 Jan 29 11:39:17 crc kubenswrapper[4766]: I0129 11:39:17.363881 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" event={"ID":"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a","Type":"ContainerDied","Data":"6e6ca83c79b07ee253c2ead25709cdf0f2689e63dd55c9ddea37747adac17fa8"} Jan 29 11:39:17 crc kubenswrapper[4766]: I0129 11:39:17.363905 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" 
event={"ID":"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a","Type":"ContainerStarted","Data":"f2e08a09c8256dcfbd1ccad5d2946f6ff93f59cfb98c59a5e92b10bac66b9370"} Jan 29 11:39:17 crc kubenswrapper[4766]: I0129 11:39:17.363922 4766 scope.go:117] "RemoveContainer" containerID="7fa4d042a6c05c408b2d3dedbac93c3fc30503468a0d3531b823b069420802bd" Jan 29 11:39:21 crc kubenswrapper[4766]: I0129 11:39:21.992113 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gfnfq"] Jan 29 11:39:21 crc kubenswrapper[4766]: E0129 11:39:21.992702 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfb2cb8f-73b5-469f-a9b7-e21468138ac3" containerName="registry-server" Jan 29 11:39:21 crc kubenswrapper[4766]: I0129 11:39:21.992719 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfb2cb8f-73b5-469f-a9b7-e21468138ac3" containerName="registry-server" Jan 29 11:39:21 crc kubenswrapper[4766]: E0129 11:39:21.992734 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d65787f-1649-4fde-a97e-7efb8a7ab17f" containerName="extract-utilities" Jan 29 11:39:21 crc kubenswrapper[4766]: I0129 11:39:21.992741 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d65787f-1649-4fde-a97e-7efb8a7ab17f" containerName="extract-utilities" Jan 29 11:39:21 crc kubenswrapper[4766]: E0129 11:39:21.992760 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfb2cb8f-73b5-469f-a9b7-e21468138ac3" containerName="extract-utilities" Jan 29 11:39:21 crc kubenswrapper[4766]: I0129 11:39:21.992768 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfb2cb8f-73b5-469f-a9b7-e21468138ac3" containerName="extract-utilities" Jan 29 11:39:21 crc kubenswrapper[4766]: E0129 11:39:21.992780 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfb2cb8f-73b5-469f-a9b7-e21468138ac3" containerName="extract-content" Jan 29 11:39:21 crc kubenswrapper[4766]: I0129 11:39:21.992788 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfb2cb8f-73b5-469f-a9b7-e21468138ac3" containerName="extract-content" Jan 29 11:39:21 crc kubenswrapper[4766]: E0129 11:39:21.992803 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d65787f-1649-4fde-a97e-7efb8a7ab17f" containerName="registry-server" Jan 29 11:39:21 crc kubenswrapper[4766]: I0129 11:39:21.992812 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d65787f-1649-4fde-a97e-7efb8a7ab17f" containerName="registry-server" Jan 29 11:39:21 crc kubenswrapper[4766]: E0129 11:39:21.992821 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d65787f-1649-4fde-a97e-7efb8a7ab17f" containerName="extract-content" Jan 29 11:39:21 crc kubenswrapper[4766]: I0129 11:39:21.992828 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d65787f-1649-4fde-a97e-7efb8a7ab17f" containerName="extract-content" Jan 29 11:39:21 crc kubenswrapper[4766]: I0129 11:39:21.992940 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d65787f-1649-4fde-a97e-7efb8a7ab17f" containerName="registry-server" Jan 29 11:39:21 crc kubenswrapper[4766]: I0129 11:39:21.992955 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfb2cb8f-73b5-469f-a9b7-e21468138ac3" containerName="registry-server" Jan 29 11:39:21 crc kubenswrapper[4766]: I0129 11:39:21.993720 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gfnfq" Jan 29 11:39:22 crc kubenswrapper[4766]: I0129 11:39:22.011631 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gfnfq"] Jan 29 11:39:22 crc kubenswrapper[4766]: I0129 11:39:22.031739 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/855e0781-24ab-4311-897f-015e31830df3-utilities\") pod \"community-operators-gfnfq\" (UID: \"855e0781-24ab-4311-897f-015e31830df3\") " pod="openshift-marketplace/community-operators-gfnfq" Jan 29 11:39:22 crc kubenswrapper[4766]: I0129 11:39:22.031822 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/855e0781-24ab-4311-897f-015e31830df3-catalog-content\") pod \"community-operators-gfnfq\" (UID: \"855e0781-24ab-4311-897f-015e31830df3\") " pod="openshift-marketplace/community-operators-gfnfq" Jan 29 11:39:22 crc kubenswrapper[4766]: I0129 11:39:22.031861 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqqh8\" (UniqueName: \"kubernetes.io/projected/855e0781-24ab-4311-897f-015e31830df3-kube-api-access-vqqh8\") pod \"community-operators-gfnfq\" (UID: \"855e0781-24ab-4311-897f-015e31830df3\") " pod="openshift-marketplace/community-operators-gfnfq" Jan 29 11:39:22 crc kubenswrapper[4766]: I0129 11:39:22.132976 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/855e0781-24ab-4311-897f-015e31830df3-utilities\") pod \"community-operators-gfnfq\" (UID: \"855e0781-24ab-4311-897f-015e31830df3\") " pod="openshift-marketplace/community-operators-gfnfq" Jan 29 11:39:22 crc kubenswrapper[4766]: I0129 11:39:22.133051 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/855e0781-24ab-4311-897f-015e31830df3-catalog-content\") pod \"community-operators-gfnfq\" (UID: \"855e0781-24ab-4311-897f-015e31830df3\") " pod="openshift-marketplace/community-operators-gfnfq" Jan 29 11:39:22 crc kubenswrapper[4766]: I0129 11:39:22.133080 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqqh8\" (UniqueName: \"kubernetes.io/projected/855e0781-24ab-4311-897f-015e31830df3-kube-api-access-vqqh8\") pod \"community-operators-gfnfq\" (UID: \"855e0781-24ab-4311-897f-015e31830df3\") " pod="openshift-marketplace/community-operators-gfnfq" Jan 29 11:39:22 crc kubenswrapper[4766]: I0129 11:39:22.133743 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/855e0781-24ab-4311-897f-015e31830df3-catalog-content\") pod \"community-operators-gfnfq\" (UID: \"855e0781-24ab-4311-897f-015e31830df3\") " pod="openshift-marketplace/community-operators-gfnfq" Jan 29 11:39:22 crc kubenswrapper[4766]: I0129 11:39:22.133885 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/855e0781-24ab-4311-897f-015e31830df3-utilities\") pod \"community-operators-gfnfq\" (UID: \"855e0781-24ab-4311-897f-015e31830df3\") " pod="openshift-marketplace/community-operators-gfnfq" Jan 29 11:39:22 crc kubenswrapper[4766]: I0129 11:39:22.162021 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vqqh8\" (UniqueName: \"kubernetes.io/projected/855e0781-24ab-4311-897f-015e31830df3-kube-api-access-vqqh8\") pod \"community-operators-gfnfq\" (UID: \"855e0781-24ab-4311-897f-015e31830df3\") " pod="openshift-marketplace/community-operators-gfnfq" Jan 29 11:39:22 crc kubenswrapper[4766]: I0129 11:39:22.311302 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gfnfq" Jan 29 11:39:22 crc kubenswrapper[4766]: I0129 11:39:22.627035 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gfnfq"] Jan 29 11:39:22 crc kubenswrapper[4766]: I0129 11:39:22.664388 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-h927r"] Jan 29 11:39:22 crc kubenswrapper[4766]: I0129 11:39:22.665600 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-h927r" Jan 29 11:39:22 crc kubenswrapper[4766]: I0129 11:39:22.668547 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 29 11:39:22 crc kubenswrapper[4766]: I0129 11:39:22.668691 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 29 11:39:22 crc kubenswrapper[4766]: I0129 11:39:22.668732 4766 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-4wdwv" Jan 29 11:39:22 crc kubenswrapper[4766]: I0129 11:39:22.669126 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 29 11:39:22 crc kubenswrapper[4766]: I0129 11:39:22.675536 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-h927r"] Jan 29 11:39:22 crc kubenswrapper[4766]: I0129 11:39:22.742383 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flp9c\" (UniqueName: \"kubernetes.io/projected/fd288d6c-57bd-447a-b4cb-184164ea59e6-kube-api-access-flp9c\") pod \"crc-storage-crc-h927r\" (UID: \"fd288d6c-57bd-447a-b4cb-184164ea59e6\") " pod="crc-storage/crc-storage-crc-h927r" Jan 29 11:39:22 crc kubenswrapper[4766]: I0129 11:39:22.742470 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/fd288d6c-57bd-447a-b4cb-184164ea59e6-node-mnt\") pod \"crc-storage-crc-h927r\" (UID: \"fd288d6c-57bd-447a-b4cb-184164ea59e6\") " pod="crc-storage/crc-storage-crc-h927r" Jan 29 11:39:22 crc kubenswrapper[4766]: I0129 11:39:22.742533 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/fd288d6c-57bd-447a-b4cb-184164ea59e6-crc-storage\") pod \"crc-storage-crc-h927r\" (UID: \"fd288d6c-57bd-447a-b4cb-184164ea59e6\") " pod="crc-storage/crc-storage-crc-h927r" Jan 29 11:39:22 crc kubenswrapper[4766]: I0129 11:39:22.843548 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flp9c\" (UniqueName: \"kubernetes.io/projected/fd288d6c-57bd-447a-b4cb-184164ea59e6-kube-api-access-flp9c\") pod \"crc-storage-crc-h927r\" (UID: \"fd288d6c-57bd-447a-b4cb-184164ea59e6\") " pod="crc-storage/crc-storage-crc-h927r" Jan 29 11:39:22 crc kubenswrapper[4766]: I0129 11:39:22.843630 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/fd288d6c-57bd-447a-b4cb-184164ea59e6-node-mnt\") pod \"crc-storage-crc-h927r\" (UID: \"fd288d6c-57bd-447a-b4cb-184164ea59e6\") " pod="crc-storage/crc-storage-crc-h927r" Jan 29 11:39:22 crc kubenswrapper[4766]: I0129 11:39:22.843699 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/fd288d6c-57bd-447a-b4cb-184164ea59e6-crc-storage\") pod \"crc-storage-crc-h927r\" (UID: \"fd288d6c-57bd-447a-b4cb-184164ea59e6\") " pod="crc-storage/crc-storage-crc-h927r" Jan 29 11:39:22 crc kubenswrapper[4766]: I0129 11:39:22.844224 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/fd288d6c-57bd-447a-b4cb-184164ea59e6-node-mnt\") pod \"crc-storage-crc-h927r\" (UID: \"fd288d6c-57bd-447a-b4cb-184164ea59e6\") " pod="crc-storage/crc-storage-crc-h927r" Jan 29 11:39:22 crc kubenswrapper[4766]: I0129 11:39:22.844630 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/fd288d6c-57bd-447a-b4cb-184164ea59e6-crc-storage\") pod \"crc-storage-crc-h927r\" (UID: \"fd288d6c-57bd-447a-b4cb-184164ea59e6\") " pod="crc-storage/crc-storage-crc-h927r" Jan 29 11:39:22 crc kubenswrapper[4766]: I0129 11:39:22.864532 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flp9c\" (UniqueName: \"kubernetes.io/projected/fd288d6c-57bd-447a-b4cb-184164ea59e6-kube-api-access-flp9c\") pod \"crc-storage-crc-h927r\" (UID: \"fd288d6c-57bd-447a-b4cb-184164ea59e6\") " pod="crc-storage/crc-storage-crc-h927r" Jan 29 11:39:23 crc kubenswrapper[4766]: I0129 11:39:23.014746 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-h927r" Jan 29 11:39:23 crc kubenswrapper[4766]: I0129 11:39:23.402750 4766 generic.go:334] "Generic (PLEG): container finished" podID="855e0781-24ab-4311-897f-015e31830df3" containerID="ddc701e5367c9a2352ea51559850c3cce8a9c5bc93242fc6dcd3cc5da4310443" exitCode=0 Jan 29 11:39:23 crc kubenswrapper[4766]: I0129 11:39:23.402843 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gfnfq" event={"ID":"855e0781-24ab-4311-897f-015e31830df3","Type":"ContainerDied","Data":"ddc701e5367c9a2352ea51559850c3cce8a9c5bc93242fc6dcd3cc5da4310443"} Jan 29 11:39:23 crc kubenswrapper[4766]: I0129 11:39:23.403110 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gfnfq" event={"ID":"855e0781-24ab-4311-897f-015e31830df3","Type":"ContainerStarted","Data":"d918f7b1cc13d5a3d5dbaf9c40b8b71159820b1a9b016bb320b1c288cc2706dc"} Jan 29 11:39:23 crc kubenswrapper[4766]: I0129 11:39:23.458665 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-h927r"] Jan 29 11:39:24 crc kubenswrapper[4766]: I0129 11:39:24.408174 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-h927r" event={"ID":"fd288d6c-57bd-447a-b4cb-184164ea59e6","Type":"ContainerStarted","Data":"14ab2e058bca6c193a60b169029723160504ef0f08a422f156d16afc427b1298"} Jan 29 11:39:24 crc kubenswrapper[4766]: I0129 11:39:24.409579 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gfnfq" event={"ID":"855e0781-24ab-4311-897f-015e31830df3","Type":"ContainerStarted","Data":"965eb94a1d11d4b5aa7e3b638bce9c47d614de2b9f94a0982e1693a20aea977e"} Jan 29 11:39:25 crc kubenswrapper[4766]: I0129 11:39:25.416955 4766 generic.go:334] "Generic (PLEG): container finished" podID="855e0781-24ab-4311-897f-015e31830df3" containerID="965eb94a1d11d4b5aa7e3b638bce9c47d614de2b9f94a0982e1693a20aea977e" exitCode=0 Jan 29 11:39:25 crc kubenswrapper[4766]: I0129 11:39:25.417047 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gfnfq" event={"ID":"855e0781-24ab-4311-897f-015e31830df3","Type":"ContainerDied","Data":"965eb94a1d11d4b5aa7e3b638bce9c47d614de2b9f94a0982e1693a20aea977e"} Jan 29 11:39:25 crc kubenswrapper[4766]: I0129 11:39:25.419440 4766 generic.go:334] "Generic (PLEG): container finished" podID="fd288d6c-57bd-447a-b4cb-184164ea59e6" containerID="285e39c01f2f9794565d8f70a2a8948cb0f36b16f69c18c6f3bd180a9a8287f1" exitCode=0 Jan 29 11:39:25 crc kubenswrapper[4766]: I0129 11:39:25.419486 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-h927r" event={"ID":"fd288d6c-57bd-447a-b4cb-184164ea59e6","Type":"ContainerDied","Data":"285e39c01f2f9794565d8f70a2a8948cb0f36b16f69c18c6f3bd180a9a8287f1"} Jan 29 11:39:26 crc kubenswrapper[4766]: I0129 11:39:26.426695 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gfnfq" event={"ID":"855e0781-24ab-4311-897f-015e31830df3","Type":"ContainerStarted","Data":"495be3cfa22186516364f6db332a34c5edea7b3b5fc70456f476717c6faa052e"} Jan 29 11:39:26 crc kubenswrapper[4766]: I0129 11:39:26.451493 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gfnfq" podStartSLOduration=3.022017695 podStartE2EDuration="5.451473759s" podCreationTimestamp="2026-01-29 11:39:21 +0000 UTC" 
firstStartedPulling="2026-01-29 11:39:23.404501778 +0000 UTC m=+1100.516894789" lastFinishedPulling="2026-01-29 11:39:25.833957832 +0000 UTC m=+1102.946350853" observedRunningTime="2026-01-29 11:39:26.448429903 +0000 UTC m=+1103.560822924" watchObservedRunningTime="2026-01-29 11:39:26.451473759 +0000 UTC m=+1103.563866780" Jan 29 11:39:26 crc kubenswrapper[4766]: I0129 11:39:26.625668 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-h927r" Jan 29 11:39:26 crc kubenswrapper[4766]: I0129 11:39:26.799789 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/fd288d6c-57bd-447a-b4cb-184164ea59e6-crc-storage\") pod \"fd288d6c-57bd-447a-b4cb-184164ea59e6\" (UID: \"fd288d6c-57bd-447a-b4cb-184164ea59e6\") " Jan 29 11:39:26 crc kubenswrapper[4766]: I0129 11:39:26.799949 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flp9c\" (UniqueName: \"kubernetes.io/projected/fd288d6c-57bd-447a-b4cb-184164ea59e6-kube-api-access-flp9c\") pod \"fd288d6c-57bd-447a-b4cb-184164ea59e6\" (UID: \"fd288d6c-57bd-447a-b4cb-184164ea59e6\") " Jan 29 11:39:26 crc kubenswrapper[4766]: I0129 11:39:26.799985 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/fd288d6c-57bd-447a-b4cb-184164ea59e6-node-mnt\") pod \"fd288d6c-57bd-447a-b4cb-184164ea59e6\" (UID: \"fd288d6c-57bd-447a-b4cb-184164ea59e6\") " Jan 29 11:39:26 crc kubenswrapper[4766]: I0129 11:39:26.800115 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd288d6c-57bd-447a-b4cb-184164ea59e6-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "fd288d6c-57bd-447a-b4cb-184164ea59e6" (UID: "fd288d6c-57bd-447a-b4cb-184164ea59e6"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:39:26 crc kubenswrapper[4766]: I0129 11:39:26.800240 4766 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/fd288d6c-57bd-447a-b4cb-184164ea59e6-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:26 crc kubenswrapper[4766]: I0129 11:39:26.808800 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd288d6c-57bd-447a-b4cb-184164ea59e6-kube-api-access-flp9c" (OuterVolumeSpecName: "kube-api-access-flp9c") pod "fd288d6c-57bd-447a-b4cb-184164ea59e6" (UID: "fd288d6c-57bd-447a-b4cb-184164ea59e6"). InnerVolumeSpecName "kube-api-access-flp9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:39:26 crc kubenswrapper[4766]: I0129 11:39:26.815982 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd288d6c-57bd-447a-b4cb-184164ea59e6-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "fd288d6c-57bd-447a-b4cb-184164ea59e6" (UID: "fd288d6c-57bd-447a-b4cb-184164ea59e6"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:39:26 crc kubenswrapper[4766]: I0129 11:39:26.901706 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flp9c\" (UniqueName: \"kubernetes.io/projected/fd288d6c-57bd-447a-b4cb-184164ea59e6-kube-api-access-flp9c\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:26 crc kubenswrapper[4766]: I0129 11:39:26.901748 4766 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/fd288d6c-57bd-447a-b4cb-184164ea59e6-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:27 crc kubenswrapper[4766]: I0129 11:39:27.464549 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-h927r" event={"ID":"fd288d6c-57bd-447a-b4cb-184164ea59e6","Type":"ContainerDied","Data":"14ab2e058bca6c193a60b169029723160504ef0f08a422f156d16afc427b1298"} Jan 29 11:39:27 crc kubenswrapper[4766]: I0129 11:39:27.464614 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14ab2e058bca6c193a60b169029723160504ef0f08a422f156d16afc427b1298" Jan 29 11:39:27 crc kubenswrapper[4766]: I0129 11:39:27.464563 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-h927r" Jan 29 11:39:32 crc kubenswrapper[4766]: I0129 11:39:32.312231 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gfnfq" Jan 29 11:39:32 crc kubenswrapper[4766]: I0129 11:39:32.312550 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gfnfq" Jan 29 11:39:32 crc kubenswrapper[4766]: I0129 11:39:32.366631 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gfnfq" Jan 29 11:39:32 crc kubenswrapper[4766]: I0129 11:39:32.544940 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gfnfq" Jan 29 11:39:33 crc kubenswrapper[4766]: I0129 11:39:33.951739 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6"] Jan 29 11:39:33 crc kubenswrapper[4766]: E0129 11:39:33.952427 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd288d6c-57bd-447a-b4cb-184164ea59e6" containerName="storage" Jan 29 11:39:33 crc kubenswrapper[4766]: I0129 11:39:33.952440 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd288d6c-57bd-447a-b4cb-184164ea59e6" containerName="storage" Jan 29 11:39:33 crc kubenswrapper[4766]: I0129 11:39:33.952647 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd288d6c-57bd-447a-b4cb-184164ea59e6" containerName="storage" Jan 29 11:39:33 crc kubenswrapper[4766]: I0129 11:39:33.954051 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6" Jan 29 11:39:33 crc kubenswrapper[4766]: I0129 11:39:33.959325 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 11:39:33 crc kubenswrapper[4766]: I0129 11:39:33.973539 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6"] Jan 29 11:39:34 crc kubenswrapper[4766]: I0129 11:39:34.090792 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd2sd\" (UniqueName: \"kubernetes.io/projected/11a99c06-5b9b-475a-b0e8-528d1e8a9eb6-kube-api-access-sd2sd\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6\" (UID: \"11a99c06-5b9b-475a-b0e8-528d1e8a9eb6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6" Jan 29 11:39:34 crc kubenswrapper[4766]: I0129 11:39:34.090930 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11a99c06-5b9b-475a-b0e8-528d1e8a9eb6-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6\" (UID: \"11a99c06-5b9b-475a-b0e8-528d1e8a9eb6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6" Jan 29 11:39:34 crc kubenswrapper[4766]: I0129 11:39:34.090964 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11a99c06-5b9b-475a-b0e8-528d1e8a9eb6-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6\" (UID: \"11a99c06-5b9b-475a-b0e8-528d1e8a9eb6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6" Jan 29 11:39:34 crc kubenswrapper[4766]: I0129 11:39:34.100342 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gfnfq"] Jan 29 11:39:34 crc kubenswrapper[4766]: I0129 11:39:34.191871 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11a99c06-5b9b-475a-b0e8-528d1e8a9eb6-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6\" (UID: \"11a99c06-5b9b-475a-b0e8-528d1e8a9eb6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6" Jan 29 11:39:34 crc kubenswrapper[4766]: I0129 11:39:34.192266 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11a99c06-5b9b-475a-b0e8-528d1e8a9eb6-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6\" (UID: \"11a99c06-5b9b-475a-b0e8-528d1e8a9eb6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6" Jan 29 11:39:34 crc kubenswrapper[4766]: I0129 11:39:34.192403 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd2sd\" (UniqueName: \"kubernetes.io/projected/11a99c06-5b9b-475a-b0e8-528d1e8a9eb6-kube-api-access-sd2sd\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6\" (UID: \"11a99c06-5b9b-475a-b0e8-528d1e8a9eb6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6" Jan 29 11:39:34 crc 
kubenswrapper[4766]: I0129 11:39:34.192640 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11a99c06-5b9b-475a-b0e8-528d1e8a9eb6-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6\" (UID: \"11a99c06-5b9b-475a-b0e8-528d1e8a9eb6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6" Jan 29 11:39:34 crc kubenswrapper[4766]: I0129 11:39:34.192720 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11a99c06-5b9b-475a-b0e8-528d1e8a9eb6-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6\" (UID: \"11a99c06-5b9b-475a-b0e8-528d1e8a9eb6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6" Jan 29 11:39:34 crc kubenswrapper[4766]: I0129 11:39:34.210525 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd2sd\" (UniqueName: \"kubernetes.io/projected/11a99c06-5b9b-475a-b0e8-528d1e8a9eb6-kube-api-access-sd2sd\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6\" (UID: \"11a99c06-5b9b-475a-b0e8-528d1e8a9eb6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6" Jan 29 11:39:34 crc kubenswrapper[4766]: I0129 11:39:34.275928 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6" Jan 29 11:39:34 crc kubenswrapper[4766]: I0129 11:39:34.468693 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6"] Jan 29 11:39:34 crc kubenswrapper[4766]: I0129 11:39:34.507214 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6" event={"ID":"11a99c06-5b9b-475a-b0e8-528d1e8a9eb6","Type":"ContainerStarted","Data":"853aab96fd4e09d9cadc835712d4857947fb7f0288b605c08191e538b7e30336"} Jan 29 11:39:34 crc kubenswrapper[4766]: I0129 11:39:34.507421 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gfnfq" podUID="855e0781-24ab-4311-897f-015e31830df3" containerName="registry-server" containerID="cri-o://495be3cfa22186516364f6db332a34c5edea7b3b5fc70456f476717c6faa052e" gracePeriod=2 Jan 29 11:39:34 crc kubenswrapper[4766]: I0129 11:39:34.800306 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gfnfq" Jan 29 11:39:35 crc kubenswrapper[4766]: I0129 11:39:35.001280 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/855e0781-24ab-4311-897f-015e31830df3-catalog-content\") pod \"855e0781-24ab-4311-897f-015e31830df3\" (UID: \"855e0781-24ab-4311-897f-015e31830df3\") " Jan 29 11:39:35 crc kubenswrapper[4766]: I0129 11:39:35.001380 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqqh8\" (UniqueName: \"kubernetes.io/projected/855e0781-24ab-4311-897f-015e31830df3-kube-api-access-vqqh8\") pod \"855e0781-24ab-4311-897f-015e31830df3\" (UID: \"855e0781-24ab-4311-897f-015e31830df3\") " Jan 29 11:39:35 crc kubenswrapper[4766]: I0129 11:39:35.001519 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/855e0781-24ab-4311-897f-015e31830df3-utilities\") pod \"855e0781-24ab-4311-897f-015e31830df3\" (UID: \"855e0781-24ab-4311-897f-015e31830df3\") " Jan 29 11:39:35 crc kubenswrapper[4766]: I0129 11:39:35.002495 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/855e0781-24ab-4311-897f-015e31830df3-utilities" (OuterVolumeSpecName: "utilities") pod "855e0781-24ab-4311-897f-015e31830df3" (UID: "855e0781-24ab-4311-897f-015e31830df3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:39:35 crc kubenswrapper[4766]: I0129 11:39:35.006308 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/855e0781-24ab-4311-897f-015e31830df3-kube-api-access-vqqh8" (OuterVolumeSpecName: "kube-api-access-vqqh8") pod "855e0781-24ab-4311-897f-015e31830df3" (UID: "855e0781-24ab-4311-897f-015e31830df3"). InnerVolumeSpecName "kube-api-access-vqqh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:39:35 crc kubenswrapper[4766]: I0129 11:39:35.057914 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/855e0781-24ab-4311-897f-015e31830df3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "855e0781-24ab-4311-897f-015e31830df3" (UID: "855e0781-24ab-4311-897f-015e31830df3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:39:35 crc kubenswrapper[4766]: I0129 11:39:35.102776 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/855e0781-24ab-4311-897f-015e31830df3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:35 crc kubenswrapper[4766]: I0129 11:39:35.102813 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqqh8\" (UniqueName: \"kubernetes.io/projected/855e0781-24ab-4311-897f-015e31830df3-kube-api-access-vqqh8\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:35 crc kubenswrapper[4766]: I0129 11:39:35.102826 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/855e0781-24ab-4311-897f-015e31830df3-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:35 crc kubenswrapper[4766]: I0129 11:39:35.515817 4766 generic.go:334] "Generic (PLEG): container finished" podID="855e0781-24ab-4311-897f-015e31830df3" containerID="495be3cfa22186516364f6db332a34c5edea7b3b5fc70456f476717c6faa052e" exitCode=0 Jan 29 11:39:35 crc kubenswrapper[4766]: I0129 11:39:35.515886 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gfnfq" event={"ID":"855e0781-24ab-4311-897f-015e31830df3","Type":"ContainerDied","Data":"495be3cfa22186516364f6db332a34c5edea7b3b5fc70456f476717c6faa052e"} Jan 29 11:39:35 crc kubenswrapper[4766]: I0129 11:39:35.515885 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gfnfq" Jan 29 11:39:35 crc kubenswrapper[4766]: I0129 11:39:35.515913 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gfnfq" event={"ID":"855e0781-24ab-4311-897f-015e31830df3","Type":"ContainerDied","Data":"d918f7b1cc13d5a3d5dbaf9c40b8b71159820b1a9b016bb320b1c288cc2706dc"} Jan 29 11:39:35 crc kubenswrapper[4766]: I0129 11:39:35.515930 4766 scope.go:117] "RemoveContainer" containerID="495be3cfa22186516364f6db332a34c5edea7b3b5fc70456f476717c6faa052e" Jan 29 11:39:35 crc kubenswrapper[4766]: I0129 11:39:35.517291 4766 generic.go:334] "Generic (PLEG): container finished" podID="11a99c06-5b9b-475a-b0e8-528d1e8a9eb6" containerID="dddcc6b1391a1494e0160c65e98a83775a090edb2af05fc76f168f16e838fd8e" exitCode=0 Jan 29 11:39:35 crc kubenswrapper[4766]: I0129 11:39:35.517613 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6" event={"ID":"11a99c06-5b9b-475a-b0e8-528d1e8a9eb6","Type":"ContainerDied","Data":"dddcc6b1391a1494e0160c65e98a83775a090edb2af05fc76f168f16e838fd8e"} Jan 29 11:39:35 crc kubenswrapper[4766]: I0129 11:39:35.533789 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gfnfq"] Jan 29 11:39:35 crc kubenswrapper[4766]: I0129 11:39:35.534931 4766 scope.go:117] "RemoveContainer" containerID="965eb94a1d11d4b5aa7e3b638bce9c47d614de2b9f94a0982e1693a20aea977e" Jan 29 11:39:35 crc kubenswrapper[4766]: I0129 11:39:35.540669 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gfnfq"] Jan 29 11:39:35 crc kubenswrapper[4766]: I0129 11:39:35.552501 4766 scope.go:117] "RemoveContainer" containerID="ddc701e5367c9a2352ea51559850c3cce8a9c5bc93242fc6dcd3cc5da4310443" Jan 29 11:39:35 crc kubenswrapper[4766]: I0129 11:39:35.570343 4766 scope.go:117] 
"RemoveContainer" containerID="495be3cfa22186516364f6db332a34c5edea7b3b5fc70456f476717c6faa052e" Jan 29 11:39:35 crc kubenswrapper[4766]: E0129 11:39:35.570842 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"495be3cfa22186516364f6db332a34c5edea7b3b5fc70456f476717c6faa052e\": container with ID starting with 495be3cfa22186516364f6db332a34c5edea7b3b5fc70456f476717c6faa052e not found: ID does not exist" containerID="495be3cfa22186516364f6db332a34c5edea7b3b5fc70456f476717c6faa052e" Jan 29 11:39:35 crc kubenswrapper[4766]: I0129 11:39:35.570877 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"495be3cfa22186516364f6db332a34c5edea7b3b5fc70456f476717c6faa052e"} err="failed to get container status \"495be3cfa22186516364f6db332a34c5edea7b3b5fc70456f476717c6faa052e\": rpc error: code = NotFound desc = could not find container \"495be3cfa22186516364f6db332a34c5edea7b3b5fc70456f476717c6faa052e\": container with ID starting with 495be3cfa22186516364f6db332a34c5edea7b3b5fc70456f476717c6faa052e not found: ID does not exist" Jan 29 11:39:35 crc kubenswrapper[4766]: I0129 11:39:35.570901 4766 scope.go:117] "RemoveContainer" containerID="965eb94a1d11d4b5aa7e3b638bce9c47d614de2b9f94a0982e1693a20aea977e" Jan 29 11:39:35 crc kubenswrapper[4766]: E0129 11:39:35.571304 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"965eb94a1d11d4b5aa7e3b638bce9c47d614de2b9f94a0982e1693a20aea977e\": container with ID starting with 965eb94a1d11d4b5aa7e3b638bce9c47d614de2b9f94a0982e1693a20aea977e not found: ID does not exist" containerID="965eb94a1d11d4b5aa7e3b638bce9c47d614de2b9f94a0982e1693a20aea977e" Jan 29 11:39:35 crc kubenswrapper[4766]: I0129 11:39:35.571393 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"965eb94a1d11d4b5aa7e3b638bce9c47d614de2b9f94a0982e1693a20aea977e"} err="failed to get container status \"965eb94a1d11d4b5aa7e3b638bce9c47d614de2b9f94a0982e1693a20aea977e\": rpc error: code = NotFound desc = could not find container \"965eb94a1d11d4b5aa7e3b638bce9c47d614de2b9f94a0982e1693a20aea977e\": container with ID starting with 965eb94a1d11d4b5aa7e3b638bce9c47d614de2b9f94a0982e1693a20aea977e not found: ID does not exist" Jan 29 11:39:35 crc kubenswrapper[4766]: I0129 11:39:35.571495 4766 scope.go:117] "RemoveContainer" containerID="ddc701e5367c9a2352ea51559850c3cce8a9c5bc93242fc6dcd3cc5da4310443" Jan 29 11:39:35 crc kubenswrapper[4766]: E0129 11:39:35.571786 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddc701e5367c9a2352ea51559850c3cce8a9c5bc93242fc6dcd3cc5da4310443\": container with ID starting with ddc701e5367c9a2352ea51559850c3cce8a9c5bc93242fc6dcd3cc5da4310443 not found: ID does not exist" containerID="ddc701e5367c9a2352ea51559850c3cce8a9c5bc93242fc6dcd3cc5da4310443" Jan 29 11:39:35 crc kubenswrapper[4766]: I0129 11:39:35.571817 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddc701e5367c9a2352ea51559850c3cce8a9c5bc93242fc6dcd3cc5da4310443"} err="failed to get container status \"ddc701e5367c9a2352ea51559850c3cce8a9c5bc93242fc6dcd3cc5da4310443\": rpc error: code = NotFound desc = could not find container \"ddc701e5367c9a2352ea51559850c3cce8a9c5bc93242fc6dcd3cc5da4310443\": container with ID starting with 
ddc701e5367c9a2352ea51559850c3cce8a9c5bc93242fc6dcd3cc5da4310443 not found: ID does not exist" Jan 29 11:39:37 crc kubenswrapper[4766]: I0129 11:39:37.232913 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="855e0781-24ab-4311-897f-015e31830df3" path="/var/lib/kubelet/pods/855e0781-24ab-4311-897f-015e31830df3/volumes" Jan 29 11:39:37 crc kubenswrapper[4766]: I0129 11:39:37.310389 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6qbw8"] Jan 29 11:39:37 crc kubenswrapper[4766]: E0129 11:39:37.312069 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="855e0781-24ab-4311-897f-015e31830df3" containerName="registry-server" Jan 29 11:39:37 crc kubenswrapper[4766]: I0129 11:39:37.312101 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="855e0781-24ab-4311-897f-015e31830df3" containerName="registry-server" Jan 29 11:39:37 crc kubenswrapper[4766]: E0129 11:39:37.312112 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="855e0781-24ab-4311-897f-015e31830df3" containerName="extract-content" Jan 29 11:39:37 crc kubenswrapper[4766]: I0129 11:39:37.312119 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="855e0781-24ab-4311-897f-015e31830df3" containerName="extract-content" Jan 29 11:39:37 crc kubenswrapper[4766]: E0129 11:39:37.312135 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="855e0781-24ab-4311-897f-015e31830df3" containerName="extract-utilities" Jan 29 11:39:37 crc kubenswrapper[4766]: I0129 11:39:37.312143 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="855e0781-24ab-4311-897f-015e31830df3" containerName="extract-utilities" Jan 29 11:39:37 crc kubenswrapper[4766]: I0129 11:39:37.312245 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="855e0781-24ab-4311-897f-015e31830df3" containerName="registry-server" Jan 29 11:39:37 crc kubenswrapper[4766]: I0129 11:39:37.312959 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qbw8" Jan 29 11:39:37 crc kubenswrapper[4766]: I0129 11:39:37.320890 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qbw8"] Jan 29 11:39:37 crc kubenswrapper[4766]: I0129 11:39:37.447472 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8c6b4d-ef30-4a5f-830b-c5c508b2464e-utilities\") pod \"redhat-marketplace-6qbw8\" (UID: \"da8c6b4d-ef30-4a5f-830b-c5c508b2464e\") " pod="openshift-marketplace/redhat-marketplace-6qbw8" Jan 29 11:39:37 crc kubenswrapper[4766]: I0129 11:39:37.447541 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5wjw\" (UniqueName: \"kubernetes.io/projected/da8c6b4d-ef30-4a5f-830b-c5c508b2464e-kube-api-access-q5wjw\") pod \"redhat-marketplace-6qbw8\" (UID: \"da8c6b4d-ef30-4a5f-830b-c5c508b2464e\") " pod="openshift-marketplace/redhat-marketplace-6qbw8" Jan 29 11:39:37 crc kubenswrapper[4766]: I0129 11:39:37.447575 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8c6b4d-ef30-4a5f-830b-c5c508b2464e-catalog-content\") pod \"redhat-marketplace-6qbw8\" (UID: \"da8c6b4d-ef30-4a5f-830b-c5c508b2464e\") " pod="openshift-marketplace/redhat-marketplace-6qbw8" Jan 29 11:39:37 crc kubenswrapper[4766]: I0129 11:39:37.529666 4766 generic.go:334] "Generic (PLEG): container finished" podID="11a99c06-5b9b-475a-b0e8-528d1e8a9eb6" containerID="480b01b73ad54cda3440e5fd429261fc48155d943c7fb5bc0b1f4f2cb331ecc6" exitCode=0 Jan 29 11:39:37 crc kubenswrapper[4766]: I0129 11:39:37.529725 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6" event={"ID":"11a99c06-5b9b-475a-b0e8-528d1e8a9eb6","Type":"ContainerDied","Data":"480b01b73ad54cda3440e5fd429261fc48155d943c7fb5bc0b1f4f2cb331ecc6"} Jan 29 11:39:37 crc kubenswrapper[4766]: I0129 11:39:37.548770 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8c6b4d-ef30-4a5f-830b-c5c508b2464e-utilities\") pod \"redhat-marketplace-6qbw8\" (UID: \"da8c6b4d-ef30-4a5f-830b-c5c508b2464e\") " pod="openshift-marketplace/redhat-marketplace-6qbw8" Jan 29 11:39:37 crc kubenswrapper[4766]: I0129 11:39:37.548833 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5wjw\" (UniqueName: \"kubernetes.io/projected/da8c6b4d-ef30-4a5f-830b-c5c508b2464e-kube-api-access-q5wjw\") pod \"redhat-marketplace-6qbw8\" (UID: \"da8c6b4d-ef30-4a5f-830b-c5c508b2464e\") " pod="openshift-marketplace/redhat-marketplace-6qbw8" Jan 29 11:39:37 crc kubenswrapper[4766]: I0129 11:39:37.548863 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8c6b4d-ef30-4a5f-830b-c5c508b2464e-catalog-content\") pod \"redhat-marketplace-6qbw8\" (UID: \"da8c6b4d-ef30-4a5f-830b-c5c508b2464e\") " pod="openshift-marketplace/redhat-marketplace-6qbw8" Jan 29 11:39:37 crc kubenswrapper[4766]: I0129 11:39:37.549288 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8c6b4d-ef30-4a5f-830b-c5c508b2464e-utilities\") pod 
\"redhat-marketplace-6qbw8\" (UID: \"da8c6b4d-ef30-4a5f-830b-c5c508b2464e\") " pod="openshift-marketplace/redhat-marketplace-6qbw8" Jan 29 11:39:37 crc kubenswrapper[4766]: I0129 11:39:37.549357 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8c6b4d-ef30-4a5f-830b-c5c508b2464e-catalog-content\") pod \"redhat-marketplace-6qbw8\" (UID: \"da8c6b4d-ef30-4a5f-830b-c5c508b2464e\") " pod="openshift-marketplace/redhat-marketplace-6qbw8" Jan 29 11:39:37 crc kubenswrapper[4766]: I0129 11:39:37.575493 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5wjw\" (UniqueName: \"kubernetes.io/projected/da8c6b4d-ef30-4a5f-830b-c5c508b2464e-kube-api-access-q5wjw\") pod \"redhat-marketplace-6qbw8\" (UID: \"da8c6b4d-ef30-4a5f-830b-c5c508b2464e\") " pod="openshift-marketplace/redhat-marketplace-6qbw8" Jan 29 11:39:37 crc kubenswrapper[4766]: I0129 11:39:37.655834 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qbw8" Jan 29 11:39:37 crc kubenswrapper[4766]: I0129 11:39:37.861026 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qbw8"] Jan 29 11:39:38 crc kubenswrapper[4766]: I0129 11:39:38.538147 4766 generic.go:334] "Generic (PLEG): container finished" podID="11a99c06-5b9b-475a-b0e8-528d1e8a9eb6" containerID="f47aaafaf4d4f4af0f52f54ff89a8f601551d426e203ecce48c5282e7541d112" exitCode=0 Jan 29 11:39:38 crc kubenswrapper[4766]: I0129 11:39:38.538251 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6" event={"ID":"11a99c06-5b9b-475a-b0e8-528d1e8a9eb6","Type":"ContainerDied","Data":"f47aaafaf4d4f4af0f52f54ff89a8f601551d426e203ecce48c5282e7541d112"} Jan 29 11:39:38 crc kubenswrapper[4766]: I0129 11:39:38.540673 4766 generic.go:334] "Generic (PLEG): container finished" podID="da8c6b4d-ef30-4a5f-830b-c5c508b2464e" containerID="afe39e855e1e6ef2e73f186c34b03242eb14695a3c38a9a8b81212c0929c50cb" exitCode=0 Jan 29 11:39:38 crc kubenswrapper[4766]: I0129 11:39:38.540716 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qbw8" event={"ID":"da8c6b4d-ef30-4a5f-830b-c5c508b2464e","Type":"ContainerDied","Data":"afe39e855e1e6ef2e73f186c34b03242eb14695a3c38a9a8b81212c0929c50cb"} Jan 29 11:39:38 crc kubenswrapper[4766]: I0129 11:39:38.540762 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qbw8" event={"ID":"da8c6b4d-ef30-4a5f-830b-c5c508b2464e","Type":"ContainerStarted","Data":"ebbb1540b210a852fcf73884b61553a7cab7a8365a73e551c563d34a4741fed1"} Jan 29 11:39:38 crc kubenswrapper[4766]: I0129 11:39:38.647763 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zn4kn"] Jan 29 11:39:38 crc kubenswrapper[4766]: I0129 11:39:38.648139 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="ovn-controller" containerID="cri-o://84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9" gracePeriod=30 Jan 29 11:39:38 crc kubenswrapper[4766]: I0129 11:39:38.648206 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" 
podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="nbdb" containerID="cri-o://c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5" gracePeriod=30 Jan 29 11:39:38 crc kubenswrapper[4766]: I0129 11:39:38.648245 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="ovn-acl-logging" containerID="cri-o://57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097" gracePeriod=30 Jan 29 11:39:38 crc kubenswrapper[4766]: I0129 11:39:38.648304 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="kube-rbac-proxy-node" containerID="cri-o://4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635" gracePeriod=30 Jan 29 11:39:38 crc kubenswrapper[4766]: I0129 11:39:38.648328 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="sbdb" containerID="cri-o://402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26" gracePeriod=30 Jan 29 11:39:38 crc kubenswrapper[4766]: I0129 11:39:38.648401 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="northd" containerID="cri-o://815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d" gracePeriod=30 Jan 29 11:39:38 crc kubenswrapper[4766]: I0129 11:39:38.648466 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b" gracePeriod=30 Jan 29 11:39:38 crc kubenswrapper[4766]: I0129 11:39:38.686641 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="ovnkube-controller" containerID="cri-o://012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b" gracePeriod=30 Jan 29 11:39:38 crc kubenswrapper[4766]: I0129 11:39:38.942012 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zn4kn_98622e63-ce1a-413d-8a0a-32610d52ab94/ovnkube-controller/3.log" Jan 29 11:39:38 crc kubenswrapper[4766]: I0129 11:39:38.944377 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zn4kn_98622e63-ce1a-413d-8a0a-32610d52ab94/ovn-acl-logging/0.log" Jan 29 11:39:38 crc kubenswrapper[4766]: I0129 11:39:38.944906 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zn4kn_98622e63-ce1a-413d-8a0a-32610d52ab94/ovn-controller/0.log" Jan 29 11:39:38 crc kubenswrapper[4766]: I0129 11:39:38.945379 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:39:38 crc kubenswrapper[4766]: I0129 11:39:38.999506 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mjgsp"] Jan 29 11:39:38 crc kubenswrapper[4766]: E0129 11:39:38.999772 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="sbdb" Jan 29 11:39:38 crc kubenswrapper[4766]: I0129 11:39:38.999788 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="sbdb" Jan 29 11:39:38 crc kubenswrapper[4766]: E0129 11:39:38.999801 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="ovnkube-controller" Jan 29 11:39:38 crc kubenswrapper[4766]: I0129 11:39:38.999811 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="ovnkube-controller" Jan 29 11:39:38 crc kubenswrapper[4766]: E0129 11:39:38.999828 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="ovn-controller" Jan 29 11:39:38 crc kubenswrapper[4766]: I0129 11:39:38.999837 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="ovn-controller" Jan 29 11:39:38 crc kubenswrapper[4766]: E0129 11:39:38.999847 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="ovn-acl-logging" Jan 29 11:39:38 crc kubenswrapper[4766]: I0129 11:39:38.999855 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="ovn-acl-logging" Jan 29 11:39:38 crc kubenswrapper[4766]: E0129 11:39:38.999865 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="nbdb" Jan 29 11:39:38 crc kubenswrapper[4766]: I0129 11:39:38.999872 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="nbdb" Jan 29 11:39:38 crc kubenswrapper[4766]: E0129 11:39:38.999886 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 11:39:38 crc kubenswrapper[4766]: I0129 11:39:38.999894 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 11:39:38 crc kubenswrapper[4766]: E0129 11:39:38.999905 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="kubecfg-setup" Jan 29 11:39:38 crc kubenswrapper[4766]: I0129 11:39:38.999913 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="kubecfg-setup" Jan 29 11:39:38 crc kubenswrapper[4766]: E0129 11:39:38.999923 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="ovnkube-controller" Jan 29 11:39:38 crc kubenswrapper[4766]: I0129 11:39:38.999932 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="ovnkube-controller" Jan 29 11:39:38 crc kubenswrapper[4766]: E0129 11:39:38.999943 4766 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="northd" Jan 29 11:39:38 crc kubenswrapper[4766]: I0129 11:39:38.999951 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="northd" Jan 29 11:39:39 crc kubenswrapper[4766]: E0129 11:39:38.999965 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="ovnkube-controller" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:38.999974 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="ovnkube-controller" Jan 29 11:39:39 crc kubenswrapper[4766]: E0129 11:39:38.999985 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="kube-rbac-proxy-node" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:38.999993 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="kube-rbac-proxy-node" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.000105 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="ovnkube-controller" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.000117 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="northd" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.000128 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="ovnkube-controller" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.000141 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.000154 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="nbdb" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.000164 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="ovn-acl-logging" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.000176 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="ovn-controller" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.000187 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="sbdb" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.000201 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="kube-rbac-proxy-node" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.000210 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="ovnkube-controller" Jan 29 11:39:39 crc kubenswrapper[4766]: E0129 11:39:39.000325 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="ovnkube-controller" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.000336 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="ovnkube-controller" Jan 29 11:39:39 crc kubenswrapper[4766]: E0129 11:39:39.000348 
4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="ovnkube-controller" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.000356 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="ovnkube-controller" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.000481 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="ovnkube-controller" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.000493 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerName="ovnkube-controller" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.002538 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.069834 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-node-log\") pod \"98622e63-ce1a-413d-8a0a-32610d52ab94\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.069900 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-run-netns\") pod \"98622e63-ce1a-413d-8a0a-32610d52ab94\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.069924 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-cni-bin\") pod \"98622e63-ce1a-413d-8a0a-32610d52ab94\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.069945 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-run-ovn\") pod \"98622e63-ce1a-413d-8a0a-32610d52ab94\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.069953 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-node-log" (OuterVolumeSpecName: "node-log") pod "98622e63-ce1a-413d-8a0a-32610d52ab94" (UID: "98622e63-ce1a-413d-8a0a-32610d52ab94"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.069974 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/98622e63-ce1a-413d-8a0a-32610d52ab94-ovnkube-script-lib\") pod \"98622e63-ce1a-413d-8a0a-32610d52ab94\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.069996 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "98622e63-ce1a-413d-8a0a-32610d52ab94" (UID: "98622e63-ce1a-413d-8a0a-32610d52ab94"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.069999 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98622e63-ce1a-413d-8a0a-32610d52ab94-env-overrides\") pod \"98622e63-ce1a-413d-8a0a-32610d52ab94\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.070062 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xk98\" (UniqueName: \"kubernetes.io/projected/98622e63-ce1a-413d-8a0a-32610d52ab94-kube-api-access-8xk98\") pod \"98622e63-ce1a-413d-8a0a-32610d52ab94\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.070112 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-run-ovn-kubernetes\") pod \"98622e63-ce1a-413d-8a0a-32610d52ab94\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.070147 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98622e63-ce1a-413d-8a0a-32610d52ab94-ovnkube-config\") pod \"98622e63-ce1a-413d-8a0a-32610d52ab94\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.070172 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-slash\") pod \"98622e63-ce1a-413d-8a0a-32610d52ab94\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.070200 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-kubelet\") pod \"98622e63-ce1a-413d-8a0a-32610d52ab94\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.070233 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-var-lib-cni-networks-ovn-kubernetes\") pod \"98622e63-ce1a-413d-8a0a-32610d52ab94\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.070251 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-etc-openvswitch\") pod \"98622e63-ce1a-413d-8a0a-32610d52ab94\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.070271 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-log-socket\") pod \"98622e63-ce1a-413d-8a0a-32610d52ab94\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.070295 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-run-openvswitch\") pod \"98622e63-ce1a-413d-8a0a-32610d52ab94\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.070316 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-cni-netd\") pod \"98622e63-ce1a-413d-8a0a-32610d52ab94\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.070337 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-systemd-units\") pod \"98622e63-ce1a-413d-8a0a-32610d52ab94\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.070359 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98622e63-ce1a-413d-8a0a-32610d52ab94-ovn-node-metrics-cert\") pod \"98622e63-ce1a-413d-8a0a-32610d52ab94\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.070380 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-var-lib-openvswitch\") pod \"98622e63-ce1a-413d-8a0a-32610d52ab94\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.070397 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-run-systemd\") pod \"98622e63-ce1a-413d-8a0a-32610d52ab94\" (UID: \"98622e63-ce1a-413d-8a0a-32610d52ab94\") " Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.070461 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98622e63-ce1a-413d-8a0a-32610d52ab94-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "98622e63-ce1a-413d-8a0a-32610d52ab94" (UID: "98622e63-ce1a-413d-8a0a-32610d52ab94"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.070495 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "98622e63-ce1a-413d-8a0a-32610d52ab94" (UID: "98622e63-ce1a-413d-8a0a-32610d52ab94"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.070640 4766 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98622e63-ce1a-413d-8a0a-32610d52ab94-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.070659 4766 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-node-log\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.070669 4766 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.070680 4766 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.071220 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98622e63-ce1a-413d-8a0a-32610d52ab94-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "98622e63-ce1a-413d-8a0a-32610d52ab94" (UID: "98622e63-ce1a-413d-8a0a-32610d52ab94"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.071262 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "98622e63-ce1a-413d-8a0a-32610d52ab94" (UID: "98622e63-ce1a-413d-8a0a-32610d52ab94"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.071286 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "98622e63-ce1a-413d-8a0a-32610d52ab94" (UID: "98622e63-ce1a-413d-8a0a-32610d52ab94"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.071393 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "98622e63-ce1a-413d-8a0a-32610d52ab94" (UID: "98622e63-ce1a-413d-8a0a-32610d52ab94"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.071441 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "98622e63-ce1a-413d-8a0a-32610d52ab94" (UID: "98622e63-ce1a-413d-8a0a-32610d52ab94"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.071464 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-log-socket" (OuterVolumeSpecName: "log-socket") pod "98622e63-ce1a-413d-8a0a-32610d52ab94" (UID: "98622e63-ce1a-413d-8a0a-32610d52ab94"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.071486 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "98622e63-ce1a-413d-8a0a-32610d52ab94" (UID: "98622e63-ce1a-413d-8a0a-32610d52ab94"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.071510 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "98622e63-ce1a-413d-8a0a-32610d52ab94" (UID: "98622e63-ce1a-413d-8a0a-32610d52ab94"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.071533 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-slash" (OuterVolumeSpecName: "host-slash") pod "98622e63-ce1a-413d-8a0a-32610d52ab94" (UID: "98622e63-ce1a-413d-8a0a-32610d52ab94"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.071556 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "98622e63-ce1a-413d-8a0a-32610d52ab94" (UID: "98622e63-ce1a-413d-8a0a-32610d52ab94"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.071581 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "98622e63-ce1a-413d-8a0a-32610d52ab94" (UID: "98622e63-ce1a-413d-8a0a-32610d52ab94"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.071606 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "98622e63-ce1a-413d-8a0a-32610d52ab94" (UID: "98622e63-ce1a-413d-8a0a-32610d52ab94"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.071747 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98622e63-ce1a-413d-8a0a-32610d52ab94-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "98622e63-ce1a-413d-8a0a-32610d52ab94" (UID: "98622e63-ce1a-413d-8a0a-32610d52ab94"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.077135 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98622e63-ce1a-413d-8a0a-32610d52ab94-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "98622e63-ce1a-413d-8a0a-32610d52ab94" (UID: "98622e63-ce1a-413d-8a0a-32610d52ab94"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.077269 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98622e63-ce1a-413d-8a0a-32610d52ab94-kube-api-access-8xk98" (OuterVolumeSpecName: "kube-api-access-8xk98") pod "98622e63-ce1a-413d-8a0a-32610d52ab94" (UID: "98622e63-ce1a-413d-8a0a-32610d52ab94"). InnerVolumeSpecName "kube-api-access-8xk98". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.084385 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "98622e63-ce1a-413d-8a0a-32610d52ab94" (UID: "98622e63-ce1a-413d-8a0a-32610d52ab94"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.171659 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.171954 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-host-run-netns\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.172044 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h48vf\" (UniqueName: \"kubernetes.io/projected/343894b3-5fe5-47dc-939d-c818175ef385-kube-api-access-h48vf\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.172116 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-run-systemd\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.172192 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-log-socket\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.172266 4766 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-node-log\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.172332 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-systemd-units\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.172400 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-run-openvswitch\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.172501 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-host-kubelet\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.172658 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/343894b3-5fe5-47dc-939d-c818175ef385-ovn-node-metrics-cert\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.172789 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/343894b3-5fe5-47dc-939d-c818175ef385-ovnkube-config\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.172864 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-etc-openvswitch\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.172894 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-var-lib-openvswitch\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.172942 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-host-slash\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.173081 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-host-cni-bin\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.173165 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-host-cni-netd\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.173242 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/343894b3-5fe5-47dc-939d-c818175ef385-env-overrides\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.173334 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-host-run-ovn-kubernetes\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.173434 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-run-ovn\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.173537 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/343894b3-5fe5-47dc-939d-c818175ef385-ovnkube-script-lib\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.173662 4766 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98622e63-ce1a-413d-8a0a-32610d52ab94-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.173735 4766 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-slash\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.173791 4766 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.173845 4766 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-var-lib-cni-networks-ovn-kubernetes\") on 
node \"crc\" DevicePath \"\"" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.173903 4766 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.173954 4766 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-log-socket\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.174002 4766 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.174052 4766 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.174106 4766 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.174156 4766 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98622e63-ce1a-413d-8a0a-32610d52ab94-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.174210 4766 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.174259 4766 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.174312 4766 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.174361 4766 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/98622e63-ce1a-413d-8a0a-32610d52ab94-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.174428 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xk98\" (UniqueName: \"kubernetes.io/projected/98622e63-ce1a-413d-8a0a-32610d52ab94-kube-api-access-8xk98\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.174492 4766 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98622e63-ce1a-413d-8a0a-32610d52ab94-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.275489 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/343894b3-5fe5-47dc-939d-c818175ef385-ovnkube-script-lib\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.275544 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.275577 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-host-run-netns\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.275601 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h48vf\" (UniqueName: \"kubernetes.io/projected/343894b3-5fe5-47dc-939d-c818175ef385-kube-api-access-h48vf\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.275622 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-run-systemd\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.275644 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-log-socket\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.275668 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-node-log\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.275688 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-systemd-units\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.275694 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-host-run-netns\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.275708 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-run-openvswitch\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.275756 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-node-log\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.275764 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-host-kubelet\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.275790 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-systemd-units\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.275813 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/343894b3-5fe5-47dc-939d-c818175ef385-ovn-node-metrics-cert\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.275812 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-log-socket\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.275850 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-host-kubelet\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.275855 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/343894b3-5fe5-47dc-939d-c818175ef385-ovnkube-config\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.275824 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-run-systemd\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.275864 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mjgsp\" (UID: 
\"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.275735 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-run-openvswitch\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.275942 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-etc-openvswitch\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.276061 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-etc-openvswitch\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.276078 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-var-lib-openvswitch\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.276109 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-var-lib-openvswitch\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.276167 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-host-slash\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.276230 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-host-cni-bin\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.276275 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-host-cni-netd\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.276295 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-host-slash\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc 
kubenswrapper[4766]: I0129 11:39:39.276306 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-host-cni-bin\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.276319 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/343894b3-5fe5-47dc-939d-c818175ef385-env-overrides\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.276349 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-host-cni-netd\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.276364 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-host-run-ovn-kubernetes\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.276406 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-run-ovn\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.276457 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-host-run-ovn-kubernetes\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.276549 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/343894b3-5fe5-47dc-939d-c818175ef385-run-ovn\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.276651 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/343894b3-5fe5-47dc-939d-c818175ef385-ovnkube-config\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.276821 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/343894b3-5fe5-47dc-939d-c818175ef385-env-overrides\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.277097 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" 
(UniqueName: \"kubernetes.io/configmap/343894b3-5fe5-47dc-939d-c818175ef385-ovnkube-script-lib\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.287277 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/343894b3-5fe5-47dc-939d-c818175ef385-ovn-node-metrics-cert\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.293442 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h48vf\" (UniqueName: \"kubernetes.io/projected/343894b3-5fe5-47dc-939d-c818175ef385-kube-api-access-h48vf\") pod \"ovnkube-node-mjgsp\" (UID: \"343894b3-5fe5-47dc-939d-c818175ef385\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.314996 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:39 crc kubenswrapper[4766]: W0129 11:39:39.339784 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod343894b3_5fe5_47dc_939d_c818175ef385.slice/crio-444b3e45d04d28c4a034f91f27228d717b3c18cfcf99e9f8b9286f6f6a0af31d WatchSource:0}: Error finding container 444b3e45d04d28c4a034f91f27228d717b3c18cfcf99e9f8b9286f6f6a0af31d: Status 404 returned error can't find the container with id 444b3e45d04d28c4a034f91f27228d717b3c18cfcf99e9f8b9286f6f6a0af31d Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.548716 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gnk2d_6986483f-6521-45da-9034-8576037c32ad/kube-multus/2.log" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.550123 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gnk2d_6986483f-6521-45da-9034-8576037c32ad/kube-multus/1.log" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.550288 4766 generic.go:334] "Generic (PLEG): container finished" podID="6986483f-6521-45da-9034-8576037c32ad" containerID="bd6d2609f7daaf516c85d29c744307fe0c6788ba02d9625f66fa94efe9993566" exitCode=2 Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.550462 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gnk2d" event={"ID":"6986483f-6521-45da-9034-8576037c32ad","Type":"ContainerDied","Data":"bd6d2609f7daaf516c85d29c744307fe0c6788ba02d9625f66fa94efe9993566"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.550537 4766 scope.go:117] "RemoveContainer" containerID="f08a33c85d7bb4c50e3fc2fb60c7b0f91c0bc795639c249410293ab1edd2d684" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.551332 4766 scope.go:117] "RemoveContainer" containerID="bd6d2609f7daaf516c85d29c744307fe0c6788ba02d9625f66fa94efe9993566" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.552992 4766 generic.go:334] "Generic (PLEG): container finished" podID="343894b3-5fe5-47dc-939d-c818175ef385" containerID="b46a50737751100ee301e68729bfebdc1c03268e4beda648990fab9c77157b8f" exitCode=0 Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.553056 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" 
event={"ID":"343894b3-5fe5-47dc-939d-c818175ef385","Type":"ContainerDied","Data":"b46a50737751100ee301e68729bfebdc1c03268e4beda648990fab9c77157b8f"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.553087 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" event={"ID":"343894b3-5fe5-47dc-939d-c818175ef385","Type":"ContainerStarted","Data":"444b3e45d04d28c4a034f91f27228d717b3c18cfcf99e9f8b9286f6f6a0af31d"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.560031 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zn4kn_98622e63-ce1a-413d-8a0a-32610d52ab94/ovnkube-controller/3.log" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.562908 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zn4kn_98622e63-ce1a-413d-8a0a-32610d52ab94/ovn-acl-logging/0.log" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.565736 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zn4kn_98622e63-ce1a-413d-8a0a-32610d52ab94/ovn-controller/0.log" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.567179 4766 generic.go:334] "Generic (PLEG): container finished" podID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerID="012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b" exitCode=0 Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.567311 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" event={"ID":"98622e63-ce1a-413d-8a0a-32610d52ab94","Type":"ContainerDied","Data":"012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.567402 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" event={"ID":"98622e63-ce1a-413d-8a0a-32610d52ab94","Type":"ContainerDied","Data":"402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.567589 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.567705 4766 generic.go:334] "Generic (PLEG): container finished" podID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerID="402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26" exitCode=0 Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.569363 4766 generic.go:334] "Generic (PLEG): container finished" podID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerID="c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5" exitCode=0 Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.569478 4766 generic.go:334] "Generic (PLEG): container finished" podID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerID="815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d" exitCode=0 Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.569574 4766 generic.go:334] "Generic (PLEG): container finished" podID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerID="1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b" exitCode=0 Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.569654 4766 generic.go:334] "Generic (PLEG): container finished" podID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerID="4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635" exitCode=0 Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.569742 4766 generic.go:334] "Generic (PLEG): container finished" podID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerID="57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097" exitCode=143 Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.569823 4766 generic.go:334] "Generic (PLEG): container finished" podID="98622e63-ce1a-413d-8a0a-32610d52ab94" containerID="84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9" exitCode=143 Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.570125 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" event={"ID":"98622e63-ce1a-413d-8a0a-32610d52ab94","Type":"ContainerDied","Data":"c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.570224 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" event={"ID":"98622e63-ce1a-413d-8a0a-32610d52ab94","Type":"ContainerDied","Data":"815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.570360 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" event={"ID":"98622e63-ce1a-413d-8a0a-32610d52ab94","Type":"ContainerDied","Data":"1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.570460 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" event={"ID":"98622e63-ce1a-413d-8a0a-32610d52ab94","Type":"ContainerDied","Data":"4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.570542 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.570630 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.570729 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.570812 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.570876 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.570925 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.570994 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.571072 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.571124 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.571188 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.571239 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" event={"ID":"98622e63-ce1a-413d-8a0a-32610d52ab94","Type":"ContainerDied","Data":"57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.571309 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.571360 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.571517 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.571593 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.571645 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.571746 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.571822 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.571872 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.571937 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.571985 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.572126 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" event={"ID":"98622e63-ce1a-413d-8a0a-32610d52ab94","Type":"ContainerDied","Data":"84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.572242 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.572314 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.572365 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.572458 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.572515 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.572592 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.572659 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.572710 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.572782 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.572831 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.572904 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zn4kn" event={"ID":"98622e63-ce1a-413d-8a0a-32610d52ab94","Type":"ContainerDied","Data":"f2cae48be25a036d875e619bf77b27b1a838220c53510580128157398d687d9c"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.572974 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.573034 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.573083 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.573147 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.573477 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.573578 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.573673 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.573755 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.573829 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.573901 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c"} Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.594215 4766 scope.go:117] "RemoveContainer" 
containerID="012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.630693 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zn4kn"] Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.636770 4766 scope.go:117] "RemoveContainer" containerID="4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.642528 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zn4kn"] Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.649930 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.663734 4766 scope.go:117] "RemoveContainer" containerID="402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.745138 4766 scope.go:117] "RemoveContainer" containerID="c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.772618 4766 scope.go:117] "RemoveContainer" containerID="815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.782378 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11a99c06-5b9b-475a-b0e8-528d1e8a9eb6-bundle\") pod \"11a99c06-5b9b-475a-b0e8-528d1e8a9eb6\" (UID: \"11a99c06-5b9b-475a-b0e8-528d1e8a9eb6\") " Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.782507 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11a99c06-5b9b-475a-b0e8-528d1e8a9eb6-util\") pod \"11a99c06-5b9b-475a-b0e8-528d1e8a9eb6\" (UID: \"11a99c06-5b9b-475a-b0e8-528d1e8a9eb6\") " Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.782553 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sd2sd\" (UniqueName: \"kubernetes.io/projected/11a99c06-5b9b-475a-b0e8-528d1e8a9eb6-kube-api-access-sd2sd\") pod \"11a99c06-5b9b-475a-b0e8-528d1e8a9eb6\" (UID: \"11a99c06-5b9b-475a-b0e8-528d1e8a9eb6\") " Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.783228 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11a99c06-5b9b-475a-b0e8-528d1e8a9eb6-bundle" (OuterVolumeSpecName: "bundle") pod "11a99c06-5b9b-475a-b0e8-528d1e8a9eb6" (UID: "11a99c06-5b9b-475a-b0e8-528d1e8a9eb6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.788924 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11a99c06-5b9b-475a-b0e8-528d1e8a9eb6-kube-api-access-sd2sd" (OuterVolumeSpecName: "kube-api-access-sd2sd") pod "11a99c06-5b9b-475a-b0e8-528d1e8a9eb6" (UID: "11a99c06-5b9b-475a-b0e8-528d1e8a9eb6"). InnerVolumeSpecName "kube-api-access-sd2sd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.788946 4766 scope.go:117] "RemoveContainer" containerID="1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.803544 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11a99c06-5b9b-475a-b0e8-528d1e8a9eb6-util" (OuterVolumeSpecName: "util") pod "11a99c06-5b9b-475a-b0e8-528d1e8a9eb6" (UID: "11a99c06-5b9b-475a-b0e8-528d1e8a9eb6"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.804491 4766 scope.go:117] "RemoveContainer" containerID="4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.824601 4766 scope.go:117] "RemoveContainer" containerID="57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.844735 4766 scope.go:117] "RemoveContainer" containerID="84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.863508 4766 scope.go:117] "RemoveContainer" containerID="b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.883810 4766 scope.go:117] "RemoveContainer" containerID="012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.884128 4766 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11a99c06-5b9b-475a-b0e8-528d1e8a9eb6-util\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.884158 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sd2sd\" (UniqueName: \"kubernetes.io/projected/11a99c06-5b9b-475a-b0e8-528d1e8a9eb6-kube-api-access-sd2sd\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.884171 4766 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11a99c06-5b9b-475a-b0e8-528d1e8a9eb6-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:39 crc kubenswrapper[4766]: E0129 11:39:39.884406 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b\": container with ID starting with 012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b not found: ID does not exist" containerID="012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.884458 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b"} err="failed to get container status \"012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b\": rpc error: code = NotFound desc = could not find container \"012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b\": container with ID starting with 012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.884486 4766 scope.go:117] "RemoveContainer" 
containerID="4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b" Jan 29 11:39:39 crc kubenswrapper[4766]: E0129 11:39:39.884775 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b\": container with ID starting with 4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b not found: ID does not exist" containerID="4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.884804 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b"} err="failed to get container status \"4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b\": rpc error: code = NotFound desc = could not find container \"4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b\": container with ID starting with 4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.884822 4766 scope.go:117] "RemoveContainer" containerID="402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26" Jan 29 11:39:39 crc kubenswrapper[4766]: E0129 11:39:39.885112 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\": container with ID starting with 402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26 not found: ID does not exist" containerID="402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.885133 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26"} err="failed to get container status \"402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\": rpc error: code = NotFound desc = could not find container \"402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\": container with ID starting with 402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26 not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.885151 4766 scope.go:117] "RemoveContainer" containerID="c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5" Jan 29 11:39:39 crc kubenswrapper[4766]: E0129 11:39:39.885380 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\": container with ID starting with c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5 not found: ID does not exist" containerID="c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.885400 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5"} err="failed to get container status \"c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\": rpc error: code = NotFound desc = could not find container \"c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\": container with ID starting with 
c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5 not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.885438 4766 scope.go:117] "RemoveContainer" containerID="815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d" Jan 29 11:39:39 crc kubenswrapper[4766]: E0129 11:39:39.885799 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\": container with ID starting with 815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d not found: ID does not exist" containerID="815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.885820 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d"} err="failed to get container status \"815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\": rpc error: code = NotFound desc = could not find container \"815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\": container with ID starting with 815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.885837 4766 scope.go:117] "RemoveContainer" containerID="1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b" Jan 29 11:39:39 crc kubenswrapper[4766]: E0129 11:39:39.886297 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\": container with ID starting with 1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b not found: ID does not exist" containerID="1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.886321 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b"} err="failed to get container status \"1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\": rpc error: code = NotFound desc = could not find container \"1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\": container with ID starting with 1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.886337 4766 scope.go:117] "RemoveContainer" containerID="4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635" Jan 29 11:39:39 crc kubenswrapper[4766]: E0129 11:39:39.886646 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\": container with ID starting with 4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635 not found: ID does not exist" containerID="4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.886669 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635"} err="failed to get container status \"4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\": rpc 
error: code = NotFound desc = could not find container \"4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\": container with ID starting with 4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635 not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.886687 4766 scope.go:117] "RemoveContainer" containerID="57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097" Jan 29 11:39:39 crc kubenswrapper[4766]: E0129 11:39:39.887022 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\": container with ID starting with 57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097 not found: ID does not exist" containerID="57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.887046 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097"} err="failed to get container status \"57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\": rpc error: code = NotFound desc = could not find container \"57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\": container with ID starting with 57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097 not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.887061 4766 scope.go:117] "RemoveContainer" containerID="84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9" Jan 29 11:39:39 crc kubenswrapper[4766]: E0129 11:39:39.887301 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\": container with ID starting with 84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9 not found: ID does not exist" containerID="84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.887325 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9"} err="failed to get container status \"84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\": rpc error: code = NotFound desc = could not find container \"84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\": container with ID starting with 84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9 not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.887342 4766 scope.go:117] "RemoveContainer" containerID="b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c" Jan 29 11:39:39 crc kubenswrapper[4766]: E0129 11:39:39.887637 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\": container with ID starting with b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c not found: ID does not exist" containerID="b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.887658 4766 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c"} err="failed to get container status \"b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\": rpc error: code = NotFound desc = could not find container \"b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\": container with ID starting with b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.887674 4766 scope.go:117] "RemoveContainer" containerID="012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.887871 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b"} err="failed to get container status \"012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b\": rpc error: code = NotFound desc = could not find container \"012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b\": container with ID starting with 012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.887890 4766 scope.go:117] "RemoveContainer" containerID="4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.888059 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b"} err="failed to get container status \"4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b\": rpc error: code = NotFound desc = could not find container \"4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b\": container with ID starting with 4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.888076 4766 scope.go:117] "RemoveContainer" containerID="402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.888371 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26"} err="failed to get container status \"402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\": rpc error: code = NotFound desc = could not find container \"402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\": container with ID starting with 402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26 not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.888388 4766 scope.go:117] "RemoveContainer" containerID="c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.889117 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5"} err="failed to get container status \"c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\": rpc error: code = NotFound desc = could not find container \"c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\": container with ID starting with c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5 not found: ID does not exist" Jan 
29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.889136 4766 scope.go:117] "RemoveContainer" containerID="815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.889474 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d"} err="failed to get container status \"815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\": rpc error: code = NotFound desc = could not find container \"815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\": container with ID starting with 815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.889501 4766 scope.go:117] "RemoveContainer" containerID="1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.890976 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b"} err="failed to get container status \"1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\": rpc error: code = NotFound desc = could not find container \"1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\": container with ID starting with 1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.891003 4766 scope.go:117] "RemoveContainer" containerID="4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.891309 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635"} err="failed to get container status \"4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\": rpc error: code = NotFound desc = could not find container \"4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\": container with ID starting with 4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635 not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.891335 4766 scope.go:117] "RemoveContainer" containerID="57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.891978 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097"} err="failed to get container status \"57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\": rpc error: code = NotFound desc = could not find container \"57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\": container with ID starting with 57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097 not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.892009 4766 scope.go:117] "RemoveContainer" containerID="84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.893002 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9"} err="failed to get container status 
\"84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\": rpc error: code = NotFound desc = could not find container \"84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\": container with ID starting with 84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9 not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.893135 4766 scope.go:117] "RemoveContainer" containerID="b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.894639 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c"} err="failed to get container status \"b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\": rpc error: code = NotFound desc = could not find container \"b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\": container with ID starting with b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.894665 4766 scope.go:117] "RemoveContainer" containerID="012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.896559 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b"} err="failed to get container status \"012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b\": rpc error: code = NotFound desc = could not find container \"012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b\": container with ID starting with 012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.896584 4766 scope.go:117] "RemoveContainer" containerID="4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.898393 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b"} err="failed to get container status \"4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b\": rpc error: code = NotFound desc = could not find container \"4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b\": container with ID starting with 4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.898427 4766 scope.go:117] "RemoveContainer" containerID="402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.898666 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26"} err="failed to get container status \"402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\": rpc error: code = NotFound desc = could not find container \"402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\": container with ID starting with 402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26 not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.898689 4766 scope.go:117] "RemoveContainer" 
containerID="c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.899039 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5"} err="failed to get container status \"c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\": rpc error: code = NotFound desc = could not find container \"c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\": container with ID starting with c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5 not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.899099 4766 scope.go:117] "RemoveContainer" containerID="815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.900909 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d"} err="failed to get container status \"815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\": rpc error: code = NotFound desc = could not find container \"815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\": container with ID starting with 815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.900934 4766 scope.go:117] "RemoveContainer" containerID="1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.903808 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b"} err="failed to get container status \"1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\": rpc error: code = NotFound desc = could not find container \"1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\": container with ID starting with 1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.903831 4766 scope.go:117] "RemoveContainer" containerID="4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.904084 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635"} err="failed to get container status \"4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\": rpc error: code = NotFound desc = could not find container \"4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\": container with ID starting with 4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635 not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.904101 4766 scope.go:117] "RemoveContainer" containerID="57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.904365 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097"} err="failed to get container status \"57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\": rpc error: code = NotFound desc = could not find 
container \"57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\": container with ID starting with 57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097 not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.904382 4766 scope.go:117] "RemoveContainer" containerID="84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.904800 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9"} err="failed to get container status \"84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\": rpc error: code = NotFound desc = could not find container \"84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\": container with ID starting with 84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9 not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.904827 4766 scope.go:117] "RemoveContainer" containerID="b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.905486 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c"} err="failed to get container status \"b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\": rpc error: code = NotFound desc = could not find container \"b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\": container with ID starting with b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.905511 4766 scope.go:117] "RemoveContainer" containerID="012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.905837 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b"} err="failed to get container status \"012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b\": rpc error: code = NotFound desc = could not find container \"012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b\": container with ID starting with 012ba785745240e8df27a0a674fa8d864d95569f2bbed7fe38919d130f186e9b not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.905855 4766 scope.go:117] "RemoveContainer" containerID="4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.906068 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b"} err="failed to get container status \"4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b\": rpc error: code = NotFound desc = could not find container \"4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b\": container with ID starting with 4fece212a715318eca7821c40626aa12b00bce174a544f754be33dcd01d0327b not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.906085 4766 scope.go:117] "RemoveContainer" containerID="402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.906736 4766 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26"} err="failed to get container status \"402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\": rpc error: code = NotFound desc = could not find container \"402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26\": container with ID starting with 402f5ebe7f0037f8c7c7e4afb5d0f4de74f3b4df89336169aa1a3503c15d8a26 not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.906778 4766 scope.go:117] "RemoveContainer" containerID="c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.907056 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5"} err="failed to get container status \"c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\": rpc error: code = NotFound desc = could not find container \"c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5\": container with ID starting with c64e8ab91eb0088922c0d7c8f5a3d73ad96481cc520c58ccdcea45204523b6c5 not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.907094 4766 scope.go:117] "RemoveContainer" containerID="815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.907577 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d"} err="failed to get container status \"815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\": rpc error: code = NotFound desc = could not find container \"815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d\": container with ID starting with 815fd9f014b7933abec5abf4ffcc65fdb7d3893984dfa786f31e2ac377726f1d not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.907605 4766 scope.go:117] "RemoveContainer" containerID="1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.907859 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b"} err="failed to get container status \"1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\": rpc error: code = NotFound desc = could not find container \"1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b\": container with ID starting with 1bde24c8bcf74b7f657d00a57b55d13b2956f81c2e797659464bef6255dce63b not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.907882 4766 scope.go:117] "RemoveContainer" containerID="4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.908275 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635"} err="failed to get container status \"4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\": rpc error: code = NotFound desc = could not find container \"4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635\": container with ID starting with 
4837f637928950e448eecacbc11d17284ee9f1945b01942ecef8a14149c93635 not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.908292 4766 scope.go:117] "RemoveContainer" containerID="57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.908542 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097"} err="failed to get container status \"57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\": rpc error: code = NotFound desc = could not find container \"57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097\": container with ID starting with 57c9866e4de2ab33b8a1f90343de13d1d79542e1d8217481ed640107a03f1097 not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.908563 4766 scope.go:117] "RemoveContainer" containerID="84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.908818 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9"} err="failed to get container status \"84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\": rpc error: code = NotFound desc = could not find container \"84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9\": container with ID starting with 84268dc11d86ff2d3b5d785bef87221b95c376220e83a3777c51c46d6ef592c9 not found: ID does not exist" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.908841 4766 scope.go:117] "RemoveContainer" containerID="b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c" Jan 29 11:39:39 crc kubenswrapper[4766]: I0129 11:39:39.909092 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c"} err="failed to get container status \"b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\": rpc error: code = NotFound desc = could not find container \"b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c\": container with ID starting with b9303b85658ee78304e30ce00b61037f8889c97fbc8a8264c297831870c9594c not found: ID does not exist" Jan 29 11:39:40 crc kubenswrapper[4766]: I0129 11:39:40.582982 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6" event={"ID":"11a99c06-5b9b-475a-b0e8-528d1e8a9eb6","Type":"ContainerDied","Data":"853aab96fd4e09d9cadc835712d4857947fb7f0288b605c08191e538b7e30336"} Jan 29 11:39:40 crc kubenswrapper[4766]: I0129 11:39:40.583305 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="853aab96fd4e09d9cadc835712d4857947fb7f0288b605c08191e538b7e30336" Jan 29 11:39:40 crc kubenswrapper[4766]: I0129 11:39:40.583013 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6" Jan 29 11:39:40 crc kubenswrapper[4766]: I0129 11:39:40.585401 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" event={"ID":"343894b3-5fe5-47dc-939d-c818175ef385","Type":"ContainerStarted","Data":"908c4e4d4731303a90554f0f175c1aef6c1a9a6ada11691dbba43e3827231511"} Jan 29 11:39:40 crc kubenswrapper[4766]: I0129 11:39:40.585452 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" event={"ID":"343894b3-5fe5-47dc-939d-c818175ef385","Type":"ContainerStarted","Data":"5c44973642930e389c07bc456c359a593a58de9d307989a0126aec48ff830b9a"} Jan 29 11:39:40 crc kubenswrapper[4766]: I0129 11:39:40.585461 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" event={"ID":"343894b3-5fe5-47dc-939d-c818175ef385","Type":"ContainerStarted","Data":"1ad141c87b0401c3f9892511aec3cd695b35fbe12e9643ee9970bb2292c5dacf"} Jan 29 11:39:40 crc kubenswrapper[4766]: I0129 11:39:40.587553 4766 generic.go:334] "Generic (PLEG): container finished" podID="da8c6b4d-ef30-4a5f-830b-c5c508b2464e" containerID="8f591c45b8e087abb0f459795101b637542c76d37f26135326fd54b48bc97f01" exitCode=0 Jan 29 11:39:40 crc kubenswrapper[4766]: I0129 11:39:40.587596 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qbw8" event={"ID":"da8c6b4d-ef30-4a5f-830b-c5c508b2464e","Type":"ContainerDied","Data":"8f591c45b8e087abb0f459795101b637542c76d37f26135326fd54b48bc97f01"} Jan 29 11:39:40 crc kubenswrapper[4766]: I0129 11:39:40.597700 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gnk2d_6986483f-6521-45da-9034-8576037c32ad/kube-multus/2.log" Jan 29 11:39:40 crc kubenswrapper[4766]: I0129 11:39:40.597762 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gnk2d" event={"ID":"6986483f-6521-45da-9034-8576037c32ad","Type":"ContainerStarted","Data":"6a64b11a9749ac48119e721cca42cbdc99dd76ae6ac729d363adf457be0d597a"} Jan 29 11:39:41 crc kubenswrapper[4766]: I0129 11:39:41.239231 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98622e63-ce1a-413d-8a0a-32610d52ab94" path="/var/lib/kubelet/pods/98622e63-ce1a-413d-8a0a-32610d52ab94/volumes" Jan 29 11:39:41 crc kubenswrapper[4766]: I0129 11:39:41.605000 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qbw8" event={"ID":"da8c6b4d-ef30-4a5f-830b-c5c508b2464e","Type":"ContainerStarted","Data":"db37cd300c787918bdc61e5d151c7aa5cbc8b876b2431b062f7b08454019dfe0"} Jan 29 11:39:41 crc kubenswrapper[4766]: I0129 11:39:41.608530 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" event={"ID":"343894b3-5fe5-47dc-939d-c818175ef385","Type":"ContainerStarted","Data":"ee48f23f47c1a344e32f705b569020399b369f94eea0765712eb5b571ae42b0a"} Jan 29 11:39:41 crc kubenswrapper[4766]: I0129 11:39:41.608582 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" event={"ID":"343894b3-5fe5-47dc-939d-c818175ef385","Type":"ContainerStarted","Data":"8c68820e27a8bcbd98b665f74f20a149150755c9ae3e74313b98c51fef808766"} Jan 29 11:39:41 crc kubenswrapper[4766]: I0129 11:39:41.608599 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" 
event={"ID":"343894b3-5fe5-47dc-939d-c818175ef385","Type":"ContainerStarted","Data":"520c71583bda5580aa695476f4f0dce6b4187a3bc28ed6a28917b27147f82659"} Jan 29 11:39:41 crc kubenswrapper[4766]: I0129 11:39:41.622879 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6qbw8" podStartSLOduration=2.003954629 podStartE2EDuration="4.622861662s" podCreationTimestamp="2026-01-29 11:39:37 +0000 UTC" firstStartedPulling="2026-01-29 11:39:38.542558053 +0000 UTC m=+1115.654951064" lastFinishedPulling="2026-01-29 11:39:41.161465086 +0000 UTC m=+1118.273858097" observedRunningTime="2026-01-29 11:39:41.622464671 +0000 UTC m=+1118.734857682" watchObservedRunningTime="2026-01-29 11:39:41.622861662 +0000 UTC m=+1118.735254673" Jan 29 11:39:43 crc kubenswrapper[4766]: I0129 11:39:43.624347 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" event={"ID":"343894b3-5fe5-47dc-939d-c818175ef385","Type":"ContainerStarted","Data":"12833d8675e377d2b2fc8a97b8a7b114c6329f423cea3544dba81da2cf46be39"} Jan 29 11:39:43 crc kubenswrapper[4766]: I0129 11:39:43.976113 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-2wgfm"] Jan 29 11:39:43 crc kubenswrapper[4766]: E0129 11:39:43.976365 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11a99c06-5b9b-475a-b0e8-528d1e8a9eb6" containerName="util" Jan 29 11:39:43 crc kubenswrapper[4766]: I0129 11:39:43.976378 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="11a99c06-5b9b-475a-b0e8-528d1e8a9eb6" containerName="util" Jan 29 11:39:43 crc kubenswrapper[4766]: E0129 11:39:43.976389 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11a99c06-5b9b-475a-b0e8-528d1e8a9eb6" containerName="extract" Jan 29 11:39:43 crc kubenswrapper[4766]: I0129 11:39:43.976395 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="11a99c06-5b9b-475a-b0e8-528d1e8a9eb6" containerName="extract" Jan 29 11:39:43 crc kubenswrapper[4766]: E0129 11:39:43.976404 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11a99c06-5b9b-475a-b0e8-528d1e8a9eb6" containerName="pull" Jan 29 11:39:43 crc kubenswrapper[4766]: I0129 11:39:43.976428 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="11a99c06-5b9b-475a-b0e8-528d1e8a9eb6" containerName="pull" Jan 29 11:39:43 crc kubenswrapper[4766]: I0129 11:39:43.976526 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="11a99c06-5b9b-475a-b0e8-528d1e8a9eb6" containerName="extract" Jan 29 11:39:43 crc kubenswrapper[4766]: I0129 11:39:43.976858 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-2wgfm" Jan 29 11:39:43 crc kubenswrapper[4766]: I0129 11:39:43.979237 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 29 11:39:43 crc kubenswrapper[4766]: I0129 11:39:43.979487 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-pxh75" Jan 29 11:39:43 crc kubenswrapper[4766]: I0129 11:39:43.979626 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 29 11:39:44 crc kubenswrapper[4766]: I0129 11:39:44.133765 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkz2x\" (UniqueName: \"kubernetes.io/projected/eb648e32-b2f9-44e3-8a32-fd27af7c41cc-kube-api-access-hkz2x\") pod \"nmstate-operator-646758c888-2wgfm\" (UID: \"eb648e32-b2f9-44e3-8a32-fd27af7c41cc\") " pod="openshift-nmstate/nmstate-operator-646758c888-2wgfm" Jan 29 11:39:44 crc kubenswrapper[4766]: I0129 11:39:44.235297 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkz2x\" (UniqueName: \"kubernetes.io/projected/eb648e32-b2f9-44e3-8a32-fd27af7c41cc-kube-api-access-hkz2x\") pod \"nmstate-operator-646758c888-2wgfm\" (UID: \"eb648e32-b2f9-44e3-8a32-fd27af7c41cc\") " pod="openshift-nmstate/nmstate-operator-646758c888-2wgfm" Jan 29 11:39:44 crc kubenswrapper[4766]: I0129 11:39:44.257622 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkz2x\" (UniqueName: \"kubernetes.io/projected/eb648e32-b2f9-44e3-8a32-fd27af7c41cc-kube-api-access-hkz2x\") pod \"nmstate-operator-646758c888-2wgfm\" (UID: \"eb648e32-b2f9-44e3-8a32-fd27af7c41cc\") " pod="openshift-nmstate/nmstate-operator-646758c888-2wgfm" Jan 29 11:39:44 crc kubenswrapper[4766]: I0129 11:39:44.291075 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-2wgfm" Jan 29 11:39:44 crc kubenswrapper[4766]: E0129 11:39:44.326174 4766 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-2wgfm_openshift-nmstate_eb648e32-b2f9-44e3-8a32-fd27af7c41cc_0(4de169a2af3884864c92063d2107bbbb4ee04e555295d5b6c36006abbfd8c68f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 11:39:44 crc kubenswrapper[4766]: E0129 11:39:44.326674 4766 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-2wgfm_openshift-nmstate_eb648e32-b2f9-44e3-8a32-fd27af7c41cc_0(4de169a2af3884864c92063d2107bbbb4ee04e555295d5b6c36006abbfd8c68f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-operator-646758c888-2wgfm" Jan 29 11:39:44 crc kubenswrapper[4766]: E0129 11:39:44.326700 4766 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-2wgfm_openshift-nmstate_eb648e32-b2f9-44e3-8a32-fd27af7c41cc_0(4de169a2af3884864c92063d2107bbbb4ee04e555295d5b6c36006abbfd8c68f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-nmstate/nmstate-operator-646758c888-2wgfm" Jan 29 11:39:44 crc kubenswrapper[4766]: E0129 11:39:44.326751 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-operator-646758c888-2wgfm_openshift-nmstate(eb648e32-b2f9-44e3-8a32-fd27af7c41cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-operator-646758c888-2wgfm_openshift-nmstate(eb648e32-b2f9-44e3-8a32-fd27af7c41cc)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-2wgfm_openshift-nmstate_eb648e32-b2f9-44e3-8a32-fd27af7c41cc_0(4de169a2af3884864c92063d2107bbbb4ee04e555295d5b6c36006abbfd8c68f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-operator-646758c888-2wgfm" podUID="eb648e32-b2f9-44e3-8a32-fd27af7c41cc" Jan 29 11:39:45 crc kubenswrapper[4766]: I0129 11:39:45.641119 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" event={"ID":"343894b3-5fe5-47dc-939d-c818175ef385","Type":"ContainerStarted","Data":"2b62df7ab2c5791b3022b5f28f8e94c075b5c8113a590b578f582cc89adf1551"} Jan 29 11:39:45 crc kubenswrapper[4766]: I0129 11:39:45.642127 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:45 crc kubenswrapper[4766]: I0129 11:39:45.642145 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:45 crc kubenswrapper[4766]: I0129 11:39:45.642156 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:45 crc kubenswrapper[4766]: I0129 11:39:45.665159 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:45 crc kubenswrapper[4766]: I0129 11:39:45.670695 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" Jan 29 11:39:45 crc kubenswrapper[4766]: I0129 11:39:45.673770 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp" podStartSLOduration=7.673755564 podStartE2EDuration="7.673755564s" podCreationTimestamp="2026-01-29 11:39:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:39:45.669711069 +0000 UTC m=+1122.782104080" watchObservedRunningTime="2026-01-29 11:39:45.673755564 +0000 UTC m=+1122.786148575" Jan 29 11:39:47 crc kubenswrapper[4766]: I0129 11:39:47.657033 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6qbw8" Jan 29 11:39:47 crc kubenswrapper[4766]: I0129 11:39:47.657089 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6qbw8" Jan 29 11:39:47 crc kubenswrapper[4766]: I0129 11:39:47.700790 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6qbw8" Jan 29 11:39:48 crc kubenswrapper[4766]: I0129 11:39:48.376771 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-2wgfm"] Jan 29 11:39:48 crc kubenswrapper[4766]: I0129 11:39:48.377191 4766 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-2wgfm" Jan 29 11:39:48 crc kubenswrapper[4766]: I0129 11:39:48.377673 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-2wgfm" Jan 29 11:39:48 crc kubenswrapper[4766]: E0129 11:39:48.418148 4766 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-2wgfm_openshift-nmstate_eb648e32-b2f9-44e3-8a32-fd27af7c41cc_0(4bb4f4e5ea8cae55ab6be1d9a701a378d7cfa760cc9a1fc0904409427f618459): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 11:39:48 crc kubenswrapper[4766]: E0129 11:39:48.418218 4766 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-2wgfm_openshift-nmstate_eb648e32-b2f9-44e3-8a32-fd27af7c41cc_0(4bb4f4e5ea8cae55ab6be1d9a701a378d7cfa760cc9a1fc0904409427f618459): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-operator-646758c888-2wgfm" Jan 29 11:39:48 crc kubenswrapper[4766]: E0129 11:39:48.418245 4766 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-2wgfm_openshift-nmstate_eb648e32-b2f9-44e3-8a32-fd27af7c41cc_0(4bb4f4e5ea8cae55ab6be1d9a701a378d7cfa760cc9a1fc0904409427f618459): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-operator-646758c888-2wgfm" Jan 29 11:39:48 crc kubenswrapper[4766]: E0129 11:39:48.418340 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-operator-646758c888-2wgfm_openshift-nmstate(eb648e32-b2f9-44e3-8a32-fd27af7c41cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-operator-646758c888-2wgfm_openshift-nmstate(eb648e32-b2f9-44e3-8a32-fd27af7c41cc)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-2wgfm_openshift-nmstate_eb648e32-b2f9-44e3-8a32-fd27af7c41cc_0(4bb4f4e5ea8cae55ab6be1d9a701a378d7cfa760cc9a1fc0904409427f618459): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-operator-646758c888-2wgfm" podUID="eb648e32-b2f9-44e3-8a32-fd27af7c41cc" Jan 29 11:39:48 crc kubenswrapper[4766]: I0129 11:39:48.697366 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6qbw8" Jan 29 11:39:50 crc kubenswrapper[4766]: I0129 11:39:50.099635 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qbw8"] Jan 29 11:39:50 crc kubenswrapper[4766]: I0129 11:39:50.666220 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6qbw8" podUID="da8c6b4d-ef30-4a5f-830b-c5c508b2464e" containerName="registry-server" containerID="cri-o://db37cd300c787918bdc61e5d151c7aa5cbc8b876b2431b062f7b08454019dfe0" gracePeriod=2 Jan 29 11:39:51 crc kubenswrapper[4766]: I0129 11:39:51.018471 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qbw8" Jan 29 11:39:51 crc kubenswrapper[4766]: I0129 11:39:51.127019 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8c6b4d-ef30-4a5f-830b-c5c508b2464e-utilities\") pod \"da8c6b4d-ef30-4a5f-830b-c5c508b2464e\" (UID: \"da8c6b4d-ef30-4a5f-830b-c5c508b2464e\") " Jan 29 11:39:51 crc kubenswrapper[4766]: I0129 11:39:51.127116 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8c6b4d-ef30-4a5f-830b-c5c508b2464e-catalog-content\") pod \"da8c6b4d-ef30-4a5f-830b-c5c508b2464e\" (UID: \"da8c6b4d-ef30-4a5f-830b-c5c508b2464e\") " Jan 29 11:39:51 crc kubenswrapper[4766]: I0129 11:39:51.127141 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5wjw\" (UniqueName: \"kubernetes.io/projected/da8c6b4d-ef30-4a5f-830b-c5c508b2464e-kube-api-access-q5wjw\") pod \"da8c6b4d-ef30-4a5f-830b-c5c508b2464e\" (UID: \"da8c6b4d-ef30-4a5f-830b-c5c508b2464e\") " Jan 29 11:39:51 crc kubenswrapper[4766]: I0129 11:39:51.128112 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da8c6b4d-ef30-4a5f-830b-c5c508b2464e-utilities" (OuterVolumeSpecName: "utilities") pod "da8c6b4d-ef30-4a5f-830b-c5c508b2464e" (UID: "da8c6b4d-ef30-4a5f-830b-c5c508b2464e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:39:51 crc kubenswrapper[4766]: I0129 11:39:51.135978 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da8c6b4d-ef30-4a5f-830b-c5c508b2464e-kube-api-access-q5wjw" (OuterVolumeSpecName: "kube-api-access-q5wjw") pod "da8c6b4d-ef30-4a5f-830b-c5c508b2464e" (UID: "da8c6b4d-ef30-4a5f-830b-c5c508b2464e"). InnerVolumeSpecName "kube-api-access-q5wjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:39:51 crc kubenswrapper[4766]: I0129 11:39:51.150035 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da8c6b4d-ef30-4a5f-830b-c5c508b2464e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da8c6b4d-ef30-4a5f-830b-c5c508b2464e" (UID: "da8c6b4d-ef30-4a5f-830b-c5c508b2464e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:39:51 crc kubenswrapper[4766]: I0129 11:39:51.228453 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8c6b4d-ef30-4a5f-830b-c5c508b2464e-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:51 crc kubenswrapper[4766]: I0129 11:39:51.228480 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8c6b4d-ef30-4a5f-830b-c5c508b2464e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:51 crc kubenswrapper[4766]: I0129 11:39:51.228492 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5wjw\" (UniqueName: \"kubernetes.io/projected/da8c6b4d-ef30-4a5f-830b-c5c508b2464e-kube-api-access-q5wjw\") on node \"crc\" DevicePath \"\"" Jan 29 11:39:51 crc kubenswrapper[4766]: I0129 11:39:51.673774 4766 generic.go:334] "Generic (PLEG): container finished" podID="da8c6b4d-ef30-4a5f-830b-c5c508b2464e" containerID="db37cd300c787918bdc61e5d151c7aa5cbc8b876b2431b062f7b08454019dfe0" exitCode=0 Jan 29 11:39:51 crc kubenswrapper[4766]: I0129 11:39:51.673820 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qbw8" event={"ID":"da8c6b4d-ef30-4a5f-830b-c5c508b2464e","Type":"ContainerDied","Data":"db37cd300c787918bdc61e5d151c7aa5cbc8b876b2431b062f7b08454019dfe0"} Jan 29 11:39:51 crc kubenswrapper[4766]: I0129 11:39:51.673849 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qbw8" event={"ID":"da8c6b4d-ef30-4a5f-830b-c5c508b2464e","Type":"ContainerDied","Data":"ebbb1540b210a852fcf73884b61553a7cab7a8365a73e551c563d34a4741fed1"} Jan 29 11:39:51 crc kubenswrapper[4766]: I0129 11:39:51.673869 4766 scope.go:117] "RemoveContainer" containerID="db37cd300c787918bdc61e5d151c7aa5cbc8b876b2431b062f7b08454019dfe0" Jan 29 11:39:51 crc kubenswrapper[4766]: I0129 11:39:51.673997 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qbw8" Jan 29 11:39:51 crc kubenswrapper[4766]: I0129 11:39:51.694619 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qbw8"] Jan 29 11:39:51 crc kubenswrapper[4766]: I0129 11:39:51.694681 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qbw8"] Jan 29 11:39:51 crc kubenswrapper[4766]: I0129 11:39:51.697457 4766 scope.go:117] "RemoveContainer" containerID="8f591c45b8e087abb0f459795101b637542c76d37f26135326fd54b48bc97f01" Jan 29 11:39:51 crc kubenswrapper[4766]: I0129 11:39:51.710798 4766 scope.go:117] "RemoveContainer" containerID="afe39e855e1e6ef2e73f186c34b03242eb14695a3c38a9a8b81212c0929c50cb" Jan 29 11:39:51 crc kubenswrapper[4766]: I0129 11:39:51.736679 4766 scope.go:117] "RemoveContainer" containerID="db37cd300c787918bdc61e5d151c7aa5cbc8b876b2431b062f7b08454019dfe0" Jan 29 11:39:51 crc kubenswrapper[4766]: E0129 11:39:51.737186 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db37cd300c787918bdc61e5d151c7aa5cbc8b876b2431b062f7b08454019dfe0\": container with ID starting with db37cd300c787918bdc61e5d151c7aa5cbc8b876b2431b062f7b08454019dfe0 not found: ID does not exist" containerID="db37cd300c787918bdc61e5d151c7aa5cbc8b876b2431b062f7b08454019dfe0" Jan 29 11:39:51 crc kubenswrapper[4766]: I0129 11:39:51.737229 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db37cd300c787918bdc61e5d151c7aa5cbc8b876b2431b062f7b08454019dfe0"} err="failed to get container status \"db37cd300c787918bdc61e5d151c7aa5cbc8b876b2431b062f7b08454019dfe0\": rpc error: code = NotFound desc = could not find container \"db37cd300c787918bdc61e5d151c7aa5cbc8b876b2431b062f7b08454019dfe0\": container with ID starting with db37cd300c787918bdc61e5d151c7aa5cbc8b876b2431b062f7b08454019dfe0 not found: ID does not exist" Jan 29 11:39:51 crc kubenswrapper[4766]: I0129 11:39:51.737255 4766 scope.go:117] "RemoveContainer" containerID="8f591c45b8e087abb0f459795101b637542c76d37f26135326fd54b48bc97f01" Jan 29 11:39:51 crc kubenswrapper[4766]: E0129 11:39:51.737748 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f591c45b8e087abb0f459795101b637542c76d37f26135326fd54b48bc97f01\": container with ID starting with 8f591c45b8e087abb0f459795101b637542c76d37f26135326fd54b48bc97f01 not found: ID does not exist" containerID="8f591c45b8e087abb0f459795101b637542c76d37f26135326fd54b48bc97f01" Jan 29 11:39:51 crc kubenswrapper[4766]: I0129 11:39:51.737776 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f591c45b8e087abb0f459795101b637542c76d37f26135326fd54b48bc97f01"} err="failed to get container status \"8f591c45b8e087abb0f459795101b637542c76d37f26135326fd54b48bc97f01\": rpc error: code = NotFound desc = could not find container \"8f591c45b8e087abb0f459795101b637542c76d37f26135326fd54b48bc97f01\": container with ID starting with 8f591c45b8e087abb0f459795101b637542c76d37f26135326fd54b48bc97f01 not found: ID does not exist" Jan 29 11:39:51 crc kubenswrapper[4766]: I0129 11:39:51.737791 4766 scope.go:117] "RemoveContainer" containerID="afe39e855e1e6ef2e73f186c34b03242eb14695a3c38a9a8b81212c0929c50cb" Jan 29 11:39:51 crc kubenswrapper[4766]: E0129 11:39:51.738196 4766 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"afe39e855e1e6ef2e73f186c34b03242eb14695a3c38a9a8b81212c0929c50cb\": container with ID starting with afe39e855e1e6ef2e73f186c34b03242eb14695a3c38a9a8b81212c0929c50cb not found: ID does not exist" containerID="afe39e855e1e6ef2e73f186c34b03242eb14695a3c38a9a8b81212c0929c50cb" Jan 29 11:39:51 crc kubenswrapper[4766]: I0129 11:39:51.738266 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afe39e855e1e6ef2e73f186c34b03242eb14695a3c38a9a8b81212c0929c50cb"} err="failed to get container status \"afe39e855e1e6ef2e73f186c34b03242eb14695a3c38a9a8b81212c0929c50cb\": rpc error: code = NotFound desc = could not find container \"afe39e855e1e6ef2e73f186c34b03242eb14695a3c38a9a8b81212c0929c50cb\": container with ID starting with afe39e855e1e6ef2e73f186c34b03242eb14695a3c38a9a8b81212c0929c50cb not found: ID does not exist" Jan 29 11:39:53 crc kubenswrapper[4766]: I0129 11:39:53.231422 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da8c6b4d-ef30-4a5f-830b-c5c508b2464e" path="/var/lib/kubelet/pods/da8c6b4d-ef30-4a5f-830b-c5c508b2464e/volumes" Jan 29 11:40:01 crc kubenswrapper[4766]: I0129 11:40:01.224308 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-2wgfm" Jan 29 11:40:01 crc kubenswrapper[4766]: I0129 11:40:01.225444 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-2wgfm" Jan 29 11:40:01 crc kubenswrapper[4766]: I0129 11:40:01.439245 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-2wgfm"] Jan 29 11:40:01 crc kubenswrapper[4766]: I0129 11:40:01.727224 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-2wgfm" event={"ID":"eb648e32-b2f9-44e3-8a32-fd27af7c41cc","Type":"ContainerStarted","Data":"6be013f47f7262da243ece1823153b67dac359d88386a488795ced01040af8bc"} Jan 29 11:40:04 crc kubenswrapper[4766]: I0129 11:40:04.743740 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-2wgfm" event={"ID":"eb648e32-b2f9-44e3-8a32-fd27af7c41cc","Type":"ContainerStarted","Data":"06ecb95f72b97dc45389cc466d3cdc73a22bac99bb6805c49d8efdd28eea91e6"} Jan 29 11:40:04 crc kubenswrapper[4766]: I0129 11:40:04.762548 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-2wgfm" podStartSLOduration=19.195537395 podStartE2EDuration="21.762529182s" podCreationTimestamp="2026-01-29 11:39:43 +0000 UTC" firstStartedPulling="2026-01-29 11:40:01.451342084 +0000 UTC m=+1138.563735095" lastFinishedPulling="2026-01-29 11:40:04.018333861 +0000 UTC m=+1141.130726882" observedRunningTime="2026-01-29 11:40:04.757956292 +0000 UTC m=+1141.870349383" watchObservedRunningTime="2026-01-29 11:40:04.762529182 +0000 UTC m=+1141.874922203" Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.788650 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-48dfn"] Jan 29 11:40:05 crc kubenswrapper[4766]: E0129 11:40:05.790365 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da8c6b4d-ef30-4a5f-830b-c5c508b2464e" containerName="extract-utilities" Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.790480 4766 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="da8c6b4d-ef30-4a5f-830b-c5c508b2464e" containerName="extract-utilities" Jan 29 11:40:05 crc kubenswrapper[4766]: E0129 11:40:05.790562 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da8c6b4d-ef30-4a5f-830b-c5c508b2464e" containerName="registry-server" Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.790631 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8c6b4d-ef30-4a5f-830b-c5c508b2464e" containerName="registry-server" Jan 29 11:40:05 crc kubenswrapper[4766]: E0129 11:40:05.790706 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da8c6b4d-ef30-4a5f-830b-c5c508b2464e" containerName="extract-content" Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.790775 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8c6b4d-ef30-4a5f-830b-c5c508b2464e" containerName="extract-content" Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.790951 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="da8c6b4d-ef30-4a5f-830b-c5c508b2464e" containerName="registry-server" Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.791675 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-48dfn" Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.794290 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-jx8td" Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.809713 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-6gndn"] Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.810568 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6gndn" Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.812783 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.814110 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb9xn\" (UniqueName: \"kubernetes.io/projected/49980fb9-2330-4be5-9d44-e308d3f2d79b-kube-api-access-hb9xn\") pod \"nmstate-metrics-54757c584b-48dfn\" (UID: \"49980fb9-2330-4be5-9d44-e308d3f2d79b\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-48dfn" Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.814232 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n7nz\" (UniqueName: \"kubernetes.io/projected/f301b6de-8128-43bf-b3cd-92e1ad13b932-kube-api-access-8n7nz\") pod \"nmstate-webhook-8474b5b9d8-6gndn\" (UID: \"f301b6de-8128-43bf-b3cd-92e1ad13b932\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6gndn" Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.814434 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f301b6de-8128-43bf-b3cd-92e1ad13b932-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-6gndn\" (UID: \"f301b6de-8128-43bf-b3cd-92e1ad13b932\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6gndn" Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.815515 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-48dfn"] Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.833591 4766 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-6gndn"] Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.843010 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-mnjxp"] Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.843845 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-mnjxp" Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.915031 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n7nz\" (UniqueName: \"kubernetes.io/projected/f301b6de-8128-43bf-b3cd-92e1ad13b932-kube-api-access-8n7nz\") pod \"nmstate-webhook-8474b5b9d8-6gndn\" (UID: \"f301b6de-8128-43bf-b3cd-92e1ad13b932\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6gndn" Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.915088 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8bf08f13-11cf-4a07-b66f-f36591ae076e-ovs-socket\") pod \"nmstate-handler-mnjxp\" (UID: \"8bf08f13-11cf-4a07-b66f-f36591ae076e\") " pod="openshift-nmstate/nmstate-handler-mnjxp" Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.915115 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw9g4\" (UniqueName: \"kubernetes.io/projected/8bf08f13-11cf-4a07-b66f-f36591ae076e-kube-api-access-nw9g4\") pod \"nmstate-handler-mnjxp\" (UID: \"8bf08f13-11cf-4a07-b66f-f36591ae076e\") " pod="openshift-nmstate/nmstate-handler-mnjxp" Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.915138 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f301b6de-8128-43bf-b3cd-92e1ad13b932-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-6gndn\" (UID: \"f301b6de-8128-43bf-b3cd-92e1ad13b932\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6gndn" Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.915161 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb9xn\" (UniqueName: \"kubernetes.io/projected/49980fb9-2330-4be5-9d44-e308d3f2d79b-kube-api-access-hb9xn\") pod \"nmstate-metrics-54757c584b-48dfn\" (UID: \"49980fb9-2330-4be5-9d44-e308d3f2d79b\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-48dfn" Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.915187 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8bf08f13-11cf-4a07-b66f-f36591ae076e-dbus-socket\") pod \"nmstate-handler-mnjxp\" (UID: \"8bf08f13-11cf-4a07-b66f-f36591ae076e\") " pod="openshift-nmstate/nmstate-handler-mnjxp" Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.915212 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8bf08f13-11cf-4a07-b66f-f36591ae076e-nmstate-lock\") pod \"nmstate-handler-mnjxp\" (UID: \"8bf08f13-11cf-4a07-b66f-f36591ae076e\") " pod="openshift-nmstate/nmstate-handler-mnjxp" Jan 29 11:40:05 crc kubenswrapper[4766]: E0129 11:40:05.915615 4766 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 29 11:40:05 crc kubenswrapper[4766]: E0129 11:40:05.915670 
4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f301b6de-8128-43bf-b3cd-92e1ad13b932-tls-key-pair podName:f301b6de-8128-43bf-b3cd-92e1ad13b932 nodeName:}" failed. No retries permitted until 2026-01-29 11:40:06.415651927 +0000 UTC m=+1143.528044938 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/f301b6de-8128-43bf-b3cd-92e1ad13b932-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-6gndn" (UID: "f301b6de-8128-43bf-b3cd-92e1ad13b932") : secret "openshift-nmstate-webhook" not found Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.983712 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-v72pq"] Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.984390 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-v72pq" Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.986012 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n7nz\" (UniqueName: \"kubernetes.io/projected/f301b6de-8128-43bf-b3cd-92e1ad13b932-kube-api-access-8n7nz\") pod \"nmstate-webhook-8474b5b9d8-6gndn\" (UID: \"f301b6de-8128-43bf-b3cd-92e1ad13b932\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6gndn" Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.988819 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-cn9zb" Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.990620 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.990648 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.991008 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb9xn\" (UniqueName: \"kubernetes.io/projected/49980fb9-2330-4be5-9d44-e308d3f2d79b-kube-api-access-hb9xn\") pod \"nmstate-metrics-54757c584b-48dfn\" (UID: \"49980fb9-2330-4be5-9d44-e308d3f2d79b\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-48dfn" Jan 29 11:40:05 crc kubenswrapper[4766]: I0129 11:40:05.993857 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-v72pq"] Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.015679 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/603cfc4d-f620-41af-98bc-06d98fcaa229-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-v72pq\" (UID: \"603cfc4d-f620-41af-98bc-06d98fcaa229\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-v72pq" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.015748 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8bf08f13-11cf-4a07-b66f-f36591ae076e-ovs-socket\") pod \"nmstate-handler-mnjxp\" (UID: \"8bf08f13-11cf-4a07-b66f-f36591ae076e\") " pod="openshift-nmstate/nmstate-handler-mnjxp" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.015785 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx8xs\" (UniqueName: 
\"kubernetes.io/projected/603cfc4d-f620-41af-98bc-06d98fcaa229-kube-api-access-gx8xs\") pod \"nmstate-console-plugin-7754f76f8b-v72pq\" (UID: \"603cfc4d-f620-41af-98bc-06d98fcaa229\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-v72pq" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.015816 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw9g4\" (UniqueName: \"kubernetes.io/projected/8bf08f13-11cf-4a07-b66f-f36591ae076e-kube-api-access-nw9g4\") pod \"nmstate-handler-mnjxp\" (UID: \"8bf08f13-11cf-4a07-b66f-f36591ae076e\") " pod="openshift-nmstate/nmstate-handler-mnjxp" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.015864 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/603cfc4d-f620-41af-98bc-06d98fcaa229-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-v72pq\" (UID: \"603cfc4d-f620-41af-98bc-06d98fcaa229\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-v72pq" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.015979 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8bf08f13-11cf-4a07-b66f-f36591ae076e-ovs-socket\") pod \"nmstate-handler-mnjxp\" (UID: \"8bf08f13-11cf-4a07-b66f-f36591ae076e\") " pod="openshift-nmstate/nmstate-handler-mnjxp" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.016029 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8bf08f13-11cf-4a07-b66f-f36591ae076e-dbus-socket\") pod \"nmstate-handler-mnjxp\" (UID: \"8bf08f13-11cf-4a07-b66f-f36591ae076e\") " pod="openshift-nmstate/nmstate-handler-mnjxp" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.016220 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8bf08f13-11cf-4a07-b66f-f36591ae076e-nmstate-lock\") pod \"nmstate-handler-mnjxp\" (UID: \"8bf08f13-11cf-4a07-b66f-f36591ae076e\") " pod="openshift-nmstate/nmstate-handler-mnjxp" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.016326 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8bf08f13-11cf-4a07-b66f-f36591ae076e-nmstate-lock\") pod \"nmstate-handler-mnjxp\" (UID: \"8bf08f13-11cf-4a07-b66f-f36591ae076e\") " pod="openshift-nmstate/nmstate-handler-mnjxp" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.016633 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8bf08f13-11cf-4a07-b66f-f36591ae076e-dbus-socket\") pod \"nmstate-handler-mnjxp\" (UID: \"8bf08f13-11cf-4a07-b66f-f36591ae076e\") " pod="openshift-nmstate/nmstate-handler-mnjxp" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.036689 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw9g4\" (UniqueName: \"kubernetes.io/projected/8bf08f13-11cf-4a07-b66f-f36591ae076e-kube-api-access-nw9g4\") pod \"nmstate-handler-mnjxp\" (UID: \"8bf08f13-11cf-4a07-b66f-f36591ae076e\") " pod="openshift-nmstate/nmstate-handler-mnjxp" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.108933 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-48dfn" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.116702 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/603cfc4d-f620-41af-98bc-06d98fcaa229-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-v72pq\" (UID: \"603cfc4d-f620-41af-98bc-06d98fcaa229\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-v72pq" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.116770 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx8xs\" (UniqueName: \"kubernetes.io/projected/603cfc4d-f620-41af-98bc-06d98fcaa229-kube-api-access-gx8xs\") pod \"nmstate-console-plugin-7754f76f8b-v72pq\" (UID: \"603cfc4d-f620-41af-98bc-06d98fcaa229\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-v72pq" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.116857 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/603cfc4d-f620-41af-98bc-06d98fcaa229-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-v72pq\" (UID: \"603cfc4d-f620-41af-98bc-06d98fcaa229\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-v72pq" Jan 29 11:40:06 crc kubenswrapper[4766]: E0129 11:40:06.116962 4766 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 29 11:40:06 crc kubenswrapper[4766]: E0129 11:40:06.117035 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/603cfc4d-f620-41af-98bc-06d98fcaa229-plugin-serving-cert podName:603cfc4d-f620-41af-98bc-06d98fcaa229 nodeName:}" failed. No retries permitted until 2026-01-29 11:40:06.617020345 +0000 UTC m=+1143.729413356 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/603cfc4d-f620-41af-98bc-06d98fcaa229-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-v72pq" (UID: "603cfc4d-f620-41af-98bc-06d98fcaa229") : secret "plugin-serving-cert" not found Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.117939 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/603cfc4d-f620-41af-98bc-06d98fcaa229-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-v72pq\" (UID: \"603cfc4d-f620-41af-98bc-06d98fcaa229\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-v72pq" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.144966 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx8xs\" (UniqueName: \"kubernetes.io/projected/603cfc4d-f620-41af-98bc-06d98fcaa229-kube-api-access-gx8xs\") pod \"nmstate-console-plugin-7754f76f8b-v72pq\" (UID: \"603cfc4d-f620-41af-98bc-06d98fcaa229\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-v72pq" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.157827 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-mnjxp" Jan 29 11:40:06 crc kubenswrapper[4766]: W0129 11:40:06.198441 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bf08f13_11cf_4a07_b66f_f36591ae076e.slice/crio-0db751b4abacefb95b03ce2ea8769064512639aaaff31062fd7539aaa7323226 WatchSource:0}: Error finding container 0db751b4abacefb95b03ce2ea8769064512639aaaff31062fd7539aaa7323226: Status 404 returned error can't find the container with id 0db751b4abacefb95b03ce2ea8769064512639aaaff31062fd7539aaa7323226 Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.207462 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5d495d9cd8-4tsfb"] Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.208378 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d495d9cd8-4tsfb" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.226239 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d495d9cd8-4tsfb"] Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.328882 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9-service-ca\") pod \"console-5d495d9cd8-4tsfb\" (UID: \"e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9\") " pod="openshift-console/console-5d495d9cd8-4tsfb" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.328930 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9-console-serving-cert\") pod \"console-5d495d9cd8-4tsfb\" (UID: \"e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9\") " pod="openshift-console/console-5d495d9cd8-4tsfb" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.328989 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9-oauth-serving-cert\") pod \"console-5d495d9cd8-4tsfb\" (UID: \"e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9\") " pod="openshift-console/console-5d495d9cd8-4tsfb" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.329009 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9-console-oauth-config\") pod \"console-5d495d9cd8-4tsfb\" (UID: \"e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9\") " pod="openshift-console/console-5d495d9cd8-4tsfb" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.329035 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9-console-config\") pod \"console-5d495d9cd8-4tsfb\" (UID: \"e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9\") " pod="openshift-console/console-5d495d9cd8-4tsfb" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.329052 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r8bt\" (UniqueName: \"kubernetes.io/projected/e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9-kube-api-access-5r8bt\") pod \"console-5d495d9cd8-4tsfb\" (UID: 
\"e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9\") " pod="openshift-console/console-5d495d9cd8-4tsfb" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.329076 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9-trusted-ca-bundle\") pod \"console-5d495d9cd8-4tsfb\" (UID: \"e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9\") " pod="openshift-console/console-5d495d9cd8-4tsfb" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.365098 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-48dfn"] Jan 29 11:40:06 crc kubenswrapper[4766]: W0129 11:40:06.371292 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49980fb9_2330_4be5_9d44_e308d3f2d79b.slice/crio-6f58c64ddefdfba3b4dbd98159d7a017270f0ea8027cb0daaf1e181095ed5f9d WatchSource:0}: Error finding container 6f58c64ddefdfba3b4dbd98159d7a017270f0ea8027cb0daaf1e181095ed5f9d: Status 404 returned error can't find the container with id 6f58c64ddefdfba3b4dbd98159d7a017270f0ea8027cb0daaf1e181095ed5f9d Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.429651 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9-service-ca\") pod \"console-5d495d9cd8-4tsfb\" (UID: \"e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9\") " pod="openshift-console/console-5d495d9cd8-4tsfb" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.429701 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9-console-serving-cert\") pod \"console-5d495d9cd8-4tsfb\" (UID: \"e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9\") " pod="openshift-console/console-5d495d9cd8-4tsfb" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.429740 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9-oauth-serving-cert\") pod \"console-5d495d9cd8-4tsfb\" (UID: \"e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9\") " pod="openshift-console/console-5d495d9cd8-4tsfb" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.429756 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9-console-oauth-config\") pod \"console-5d495d9cd8-4tsfb\" (UID: \"e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9\") " pod="openshift-console/console-5d495d9cd8-4tsfb" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.429771 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9-console-config\") pod \"console-5d495d9cd8-4tsfb\" (UID: \"e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9\") " pod="openshift-console/console-5d495d9cd8-4tsfb" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.429789 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r8bt\" (UniqueName: \"kubernetes.io/projected/e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9-kube-api-access-5r8bt\") pod \"console-5d495d9cd8-4tsfb\" (UID: \"e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9\") " 
pod="openshift-console/console-5d495d9cd8-4tsfb" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.429812 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9-trusted-ca-bundle\") pod \"console-5d495d9cd8-4tsfb\" (UID: \"e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9\") " pod="openshift-console/console-5d495d9cd8-4tsfb" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.429856 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f301b6de-8128-43bf-b3cd-92e1ad13b932-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-6gndn\" (UID: \"f301b6de-8128-43bf-b3cd-92e1ad13b932\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6gndn" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.431825 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9-service-ca\") pod \"console-5d495d9cd8-4tsfb\" (UID: \"e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9\") " pod="openshift-console/console-5d495d9cd8-4tsfb" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.432001 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9-console-config\") pod \"console-5d495d9cd8-4tsfb\" (UID: \"e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9\") " pod="openshift-console/console-5d495d9cd8-4tsfb" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.432382 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9-trusted-ca-bundle\") pod \"console-5d495d9cd8-4tsfb\" (UID: \"e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9\") " pod="openshift-console/console-5d495d9cd8-4tsfb" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.432643 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9-oauth-serving-cert\") pod \"console-5d495d9cd8-4tsfb\" (UID: \"e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9\") " pod="openshift-console/console-5d495d9cd8-4tsfb" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.436408 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9-console-serving-cert\") pod \"console-5d495d9cd8-4tsfb\" (UID: \"e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9\") " pod="openshift-console/console-5d495d9cd8-4tsfb" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.436498 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9-console-oauth-config\") pod \"console-5d495d9cd8-4tsfb\" (UID: \"e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9\") " pod="openshift-console/console-5d495d9cd8-4tsfb" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.437243 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f301b6de-8128-43bf-b3cd-92e1ad13b932-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-6gndn\" (UID: \"f301b6de-8128-43bf-b3cd-92e1ad13b932\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6gndn" Jan 29 
11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.451147 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r8bt\" (UniqueName: \"kubernetes.io/projected/e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9-kube-api-access-5r8bt\") pod \"console-5d495d9cd8-4tsfb\" (UID: \"e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9\") " pod="openshift-console/console-5d495d9cd8-4tsfb" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.542217 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d495d9cd8-4tsfb" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.635128 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/603cfc4d-f620-41af-98bc-06d98fcaa229-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-v72pq\" (UID: \"603cfc4d-f620-41af-98bc-06d98fcaa229\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-v72pq" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.640104 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/603cfc4d-f620-41af-98bc-06d98fcaa229-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-v72pq\" (UID: \"603cfc4d-f620-41af-98bc-06d98fcaa229\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-v72pq" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.658776 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-v72pq" Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.730185 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d495d9cd8-4tsfb"] Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.730497 4766 util.go:30] "No sandbox for pod can be found. 
Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.730497 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6gndn"
Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.760148 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d495d9cd8-4tsfb" event={"ID":"e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9","Type":"ContainerStarted","Data":"b71c3d5d34435faf6a6c6a1c96d7f980d3ef7480ea387864b1f49f69716862b8"}
Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.761586 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-mnjxp" event={"ID":"8bf08f13-11cf-4a07-b66f-f36591ae076e","Type":"ContainerStarted","Data":"0db751b4abacefb95b03ce2ea8769064512639aaaff31062fd7539aaa7323226"}
Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.762442 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-48dfn" event={"ID":"49980fb9-2330-4be5-9d44-e308d3f2d79b","Type":"ContainerStarted","Data":"6f58c64ddefdfba3b4dbd98159d7a017270f0ea8027cb0daaf1e181095ed5f9d"}
Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.849466 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-v72pq"]
Jan 29 11:40:06 crc kubenswrapper[4766]: W0129 11:40:06.856094 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod603cfc4d_f620_41af_98bc_06d98fcaa229.slice/crio-22ade4210ce20738f442d76a872fb238e02b0aedc5cb293623173750d5f19bd8 WatchSource:0}: Error finding container 22ade4210ce20738f442d76a872fb238e02b0aedc5cb293623173750d5f19bd8: Status 404 returned error can't find the container with id 22ade4210ce20738f442d76a872fb238e02b0aedc5cb293623173750d5f19bd8
Jan 29 11:40:06 crc kubenswrapper[4766]: I0129 11:40:06.949055 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-6gndn"]
Jan 29 11:40:06 crc kubenswrapper[4766]: W0129 11:40:06.949742 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf301b6de_8128_43bf_b3cd_92e1ad13b932.slice/crio-d87fe8db7246893940595aba3feb7dd94f31725f60ffc87d66883ecc84d12b53 WatchSource:0}: Error finding container d87fe8db7246893940595aba3feb7dd94f31725f60ffc87d66883ecc84d12b53: Status 404 returned error can't find the container with id d87fe8db7246893940595aba3feb7dd94f31725f60ffc87d66883ecc84d12b53
Jan 29 11:40:07 crc kubenswrapper[4766]: I0129 11:40:07.769936 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6gndn" event={"ID":"f301b6de-8128-43bf-b3cd-92e1ad13b932","Type":"ContainerStarted","Data":"d87fe8db7246893940595aba3feb7dd94f31725f60ffc87d66883ecc84d12b53"}
Jan 29 11:40:07 crc kubenswrapper[4766]: I0129 11:40:07.771091 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-v72pq" event={"ID":"603cfc4d-f620-41af-98bc-06d98fcaa229","Type":"ContainerStarted","Data":"22ade4210ce20738f442d76a872fb238e02b0aedc5cb293623173750d5f19bd8"}
Jan 29 11:40:07 crc kubenswrapper[4766]: I0129 11:40:07.772773 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d495d9cd8-4tsfb" event={"ID":"e72861ec-2a1f-40aa-8d0e-58c7ea1e3fa9","Type":"ContainerStarted","Data":"1441e1134c0575ba3788fda1f50f74665f5add4b17f4e7d112b32b03f597e75d"}
Jan 29 11:40:07 crc kubenswrapper[4766]: I0129 11:40:07.796539 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5d495d9cd8-4tsfb" podStartSLOduration=1.796459104 podStartE2EDuration="1.796459104s" podCreationTimestamp="2026-01-29 11:40:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:40:07.79316358 +0000 UTC m=+1144.905556591" watchObservedRunningTime="2026-01-29 11:40:07.796459104 +0000 UTC m=+1144.908852125"
Jan 29 11:40:09 crc kubenswrapper[4766]: I0129 11:40:09.338921 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mjgsp"
Jan 29 11:40:12 crc kubenswrapper[4766]: I0129 11:40:11.793256 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6gndn" event={"ID":"f301b6de-8128-43bf-b3cd-92e1ad13b932","Type":"ContainerStarted","Data":"f884349f81733d2ba9500f19d0544f45dff73639ff091bfe89bded09d1d38329"}
Jan 29 11:40:12 crc kubenswrapper[4766]: I0129 11:40:11.793672 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6gndn"
Jan 29 11:40:12 crc kubenswrapper[4766]: I0129 11:40:11.797686 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-v72pq" event={"ID":"603cfc4d-f620-41af-98bc-06d98fcaa229","Type":"ContainerStarted","Data":"2fb29fa69b0a6055c3a4a0ebfd8e87ba337bcb3b30fd22d6e71ecc86e1c61696"}
Jan 29 11:40:12 crc kubenswrapper[4766]: I0129 11:40:11.799251 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-mnjxp" event={"ID":"8bf08f13-11cf-4a07-b66f-f36591ae076e","Type":"ContainerStarted","Data":"9b3e25c325832e644652ccae538da97c63c77d0f85435cae03ebf60ca4f595ac"}
Jan 29 11:40:12 crc kubenswrapper[4766]: I0129 11:40:11.799373 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-mnjxp"
Jan 29 11:40:12 crc kubenswrapper[4766]: I0129 11:40:11.801078 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-48dfn" event={"ID":"49980fb9-2330-4be5-9d44-e308d3f2d79b","Type":"ContainerStarted","Data":"e19e394518c487a18912c5f80d5f3968e0915f7d614f5682a69a91e1fca33bb7"}
Jan 29 11:40:12 crc kubenswrapper[4766]: I0129 11:40:11.810041 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6gndn" podStartSLOduration=2.801509686 podStartE2EDuration="6.81002238s" podCreationTimestamp="2026-01-29 11:40:05 +0000 UTC" firstStartedPulling="2026-01-29 11:40:06.959794071 +0000 UTC m=+1144.072187082" lastFinishedPulling="2026-01-29 11:40:10.968306745 +0000 UTC m=+1148.080699776" observedRunningTime="2026-01-29 11:40:11.809089763 +0000 UTC m=+1148.921482804" watchObservedRunningTime="2026-01-29 11:40:11.81002238 +0000 UTC m=+1148.922415401"
m=+1148.934833014" watchObservedRunningTime="2026-01-29 11:40:11.825554382 +0000 UTC m=+1148.937947423" Jan 29 11:40:12 crc kubenswrapper[4766]: I0129 11:40:11.846700 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-mnjxp" podStartSLOduration=2.113458942 podStartE2EDuration="6.846679603s" podCreationTimestamp="2026-01-29 11:40:05 +0000 UTC" firstStartedPulling="2026-01-29 11:40:06.20364433 +0000 UTC m=+1143.316037341" lastFinishedPulling="2026-01-29 11:40:10.936864991 +0000 UTC m=+1148.049258002" observedRunningTime="2026-01-29 11:40:11.843224574 +0000 UTC m=+1148.955617605" watchObservedRunningTime="2026-01-29 11:40:11.846679603 +0000 UTC m=+1148.959072614" Jan 29 11:40:15 crc kubenswrapper[4766]: I0129 11:40:15.827774 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-48dfn" event={"ID":"49980fb9-2330-4be5-9d44-e308d3f2d79b","Type":"ContainerStarted","Data":"b1c2c2477b5e2f8e88b61325a951bc208d4c7bb3ee046e901ac8fa388f0936e2"} Jan 29 11:40:15 crc kubenswrapper[4766]: I0129 11:40:15.851347 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-48dfn" podStartSLOduration=1.7851264420000001 podStartE2EDuration="10.851322558s" podCreationTimestamp="2026-01-29 11:40:05 +0000 UTC" firstStartedPulling="2026-01-29 11:40:06.373259655 +0000 UTC m=+1143.485652666" lastFinishedPulling="2026-01-29 11:40:15.439455771 +0000 UTC m=+1152.551848782" observedRunningTime="2026-01-29 11:40:15.849381683 +0000 UTC m=+1152.961774724" watchObservedRunningTime="2026-01-29 11:40:15.851322558 +0000 UTC m=+1152.963715579" Jan 29 11:40:16 crc kubenswrapper[4766]: I0129 11:40:16.182269 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-mnjxp" Jan 29 11:40:16 crc kubenswrapper[4766]: I0129 11:40:16.542353 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d495d9cd8-4tsfb" Jan 29 11:40:16 crc kubenswrapper[4766]: I0129 11:40:16.542447 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d495d9cd8-4tsfb" Jan 29 11:40:16 crc kubenswrapper[4766]: I0129 11:40:16.547765 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5d495d9cd8-4tsfb" Jan 29 11:40:16 crc kubenswrapper[4766]: I0129 11:40:16.840071 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5d495d9cd8-4tsfb" Jan 29 11:40:16 crc kubenswrapper[4766]: I0129 11:40:16.885707 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-ncttr"] Jan 29 11:40:26 crc kubenswrapper[4766]: I0129 11:40:26.737170 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6gndn" Jan 29 11:40:38 crc kubenswrapper[4766]: I0129 11:40:38.130328 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz"] Jan 29 11:40:38 crc kubenswrapper[4766]: I0129 11:40:38.131708 4766 util.go:30] "No sandbox for pod can be found. 
Jan 29 11:40:38 crc kubenswrapper[4766]: I0129 11:40:38.131708 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz"
Jan 29 11:40:38 crc kubenswrapper[4766]: I0129 11:40:38.133763 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 29 11:40:38 crc kubenswrapper[4766]: I0129 11:40:38.146611 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz"]
Jan 29 11:40:38 crc kubenswrapper[4766]: I0129 11:40:38.230661 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5d54a74-7c01-406c-9b46-c2dd7df8fb9e-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz\" (UID: \"a5d54a74-7c01-406c-9b46-c2dd7df8fb9e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz"
Jan 29 11:40:38 crc kubenswrapper[4766]: I0129 11:40:38.230738 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw8l8\" (UniqueName: \"kubernetes.io/projected/a5d54a74-7c01-406c-9b46-c2dd7df8fb9e-kube-api-access-fw8l8\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz\" (UID: \"a5d54a74-7c01-406c-9b46-c2dd7df8fb9e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz"
Jan 29 11:40:38 crc kubenswrapper[4766]: I0129 11:40:38.230764 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5d54a74-7c01-406c-9b46-c2dd7df8fb9e-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz\" (UID: \"a5d54a74-7c01-406c-9b46-c2dd7df8fb9e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz"
Jan 29 11:40:38 crc kubenswrapper[4766]: I0129 11:40:38.331656 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5d54a74-7c01-406c-9b46-c2dd7df8fb9e-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz\" (UID: \"a5d54a74-7c01-406c-9b46-c2dd7df8fb9e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz"
Jan 29 11:40:38 crc kubenswrapper[4766]: I0129 11:40:38.331759 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8l8\" (UniqueName: \"kubernetes.io/projected/a5d54a74-7c01-406c-9b46-c2dd7df8fb9e-kube-api-access-fw8l8\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz\" (UID: \"a5d54a74-7c01-406c-9b46-c2dd7df8fb9e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz"
Jan 29 11:40:38 crc kubenswrapper[4766]: I0129 11:40:38.331788 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5d54a74-7c01-406c-9b46-c2dd7df8fb9e-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz\" (UID: \"a5d54a74-7c01-406c-9b46-c2dd7df8fb9e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz"
\"kubernetes.io/empty-dir/a5d54a74-7c01-406c-9b46-c2dd7df8fb9e-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz\" (UID: \"a5d54a74-7c01-406c-9b46-c2dd7df8fb9e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz" Jan 29 11:40:38 crc kubenswrapper[4766]: I0129 11:40:38.332196 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5d54a74-7c01-406c-9b46-c2dd7df8fb9e-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz\" (UID: \"a5d54a74-7c01-406c-9b46-c2dd7df8fb9e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz" Jan 29 11:40:38 crc kubenswrapper[4766]: I0129 11:40:38.349477 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw8l8\" (UniqueName: \"kubernetes.io/projected/a5d54a74-7c01-406c-9b46-c2dd7df8fb9e-kube-api-access-fw8l8\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz\" (UID: \"a5d54a74-7c01-406c-9b46-c2dd7df8fb9e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz" Jan 29 11:40:38 crc kubenswrapper[4766]: I0129 11:40:38.448681 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz" Jan 29 11:40:38 crc kubenswrapper[4766]: I0129 11:40:38.630596 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz"] Jan 29 11:40:38 crc kubenswrapper[4766]: I0129 11:40:38.962471 4766 generic.go:334] "Generic (PLEG): container finished" podID="a5d54a74-7c01-406c-9b46-c2dd7df8fb9e" containerID="31d931b46574adeefd6d84255fd650d1957afb23ab3f4fe9f928e1a937f22e84" exitCode=0 Jan 29 11:40:38 crc kubenswrapper[4766]: I0129 11:40:38.962509 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz" event={"ID":"a5d54a74-7c01-406c-9b46-c2dd7df8fb9e","Type":"ContainerDied","Data":"31d931b46574adeefd6d84255fd650d1957afb23ab3f4fe9f928e1a937f22e84"} Jan 29 11:40:38 crc kubenswrapper[4766]: I0129 11:40:38.962535 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz" event={"ID":"a5d54a74-7c01-406c-9b46-c2dd7df8fb9e","Type":"ContainerStarted","Data":"e2772239be2aba04e0cf932ee9eac0fdde5fc598bb1b68c80773f2f79507dee8"} Jan 29 11:40:40 crc kubenswrapper[4766]: I0129 11:40:40.975459 4766 generic.go:334] "Generic (PLEG): container finished" podID="a5d54a74-7c01-406c-9b46-c2dd7df8fb9e" containerID="44b85bfe3f2fb6b0f03194d2c24be83662a1caeb86f76b3ab2cad2577185bd69" exitCode=0 Jan 29 11:40:40 crc kubenswrapper[4766]: I0129 11:40:40.975546 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz" event={"ID":"a5d54a74-7c01-406c-9b46-c2dd7df8fb9e","Type":"ContainerDied","Data":"44b85bfe3f2fb6b0f03194d2c24be83662a1caeb86f76b3ab2cad2577185bd69"} Jan 29 11:40:41 crc kubenswrapper[4766]: I0129 11:40:41.936941 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-ncttr" podUID="569bc384-3b96-4207-8d46-5a27bf7f21cd" containerName="console" 
containerID="cri-o://835a2e7e5ae0c51e2820db21831e46bafad02533bab7dd4328154a7c3c0f665f" gracePeriod=15 Jan 29 11:40:41 crc kubenswrapper[4766]: I0129 11:40:41.982755 4766 generic.go:334] "Generic (PLEG): container finished" podID="a5d54a74-7c01-406c-9b46-c2dd7df8fb9e" containerID="ed2042b08470a08b19671cd9fd2b3e660741d1a93f7b633d2a4f3588ce41e760" exitCode=0 Jan 29 11:40:41 crc kubenswrapper[4766]: I0129 11:40:41.982797 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz" event={"ID":"a5d54a74-7c01-406c-9b46-c2dd7df8fb9e","Type":"ContainerDied","Data":"ed2042b08470a08b19671cd9fd2b3e660741d1a93f7b633d2a4f3588ce41e760"} Jan 29 11:40:42 crc kubenswrapper[4766]: I0129 11:40:42.296890 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-ncttr_569bc384-3b96-4207-8d46-5a27bf7f21cd/console/0.log" Jan 29 11:40:42 crc kubenswrapper[4766]: I0129 11:40:42.297185 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ncttr" Jan 29 11:40:42 crc kubenswrapper[4766]: I0129 11:40:42.481448 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/569bc384-3b96-4207-8d46-5a27bf7f21cd-oauth-serving-cert\") pod \"569bc384-3b96-4207-8d46-5a27bf7f21cd\" (UID: \"569bc384-3b96-4207-8d46-5a27bf7f21cd\") " Jan 29 11:40:42 crc kubenswrapper[4766]: I0129 11:40:42.481546 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/569bc384-3b96-4207-8d46-5a27bf7f21cd-console-config\") pod \"569bc384-3b96-4207-8d46-5a27bf7f21cd\" (UID: \"569bc384-3b96-4207-8d46-5a27bf7f21cd\") " Jan 29 11:40:42 crc kubenswrapper[4766]: I0129 11:40:42.481571 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/569bc384-3b96-4207-8d46-5a27bf7f21cd-console-oauth-config\") pod \"569bc384-3b96-4207-8d46-5a27bf7f21cd\" (UID: \"569bc384-3b96-4207-8d46-5a27bf7f21cd\") " Jan 29 11:40:42 crc kubenswrapper[4766]: I0129 11:40:42.481606 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/569bc384-3b96-4207-8d46-5a27bf7f21cd-console-serving-cert\") pod \"569bc384-3b96-4207-8d46-5a27bf7f21cd\" (UID: \"569bc384-3b96-4207-8d46-5a27bf7f21cd\") " Jan 29 11:40:42 crc kubenswrapper[4766]: I0129 11:40:42.481622 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/569bc384-3b96-4207-8d46-5a27bf7f21cd-trusted-ca-bundle\") pod \"569bc384-3b96-4207-8d46-5a27bf7f21cd\" (UID: \"569bc384-3b96-4207-8d46-5a27bf7f21cd\") " Jan 29 11:40:42 crc kubenswrapper[4766]: I0129 11:40:42.481643 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/569bc384-3b96-4207-8d46-5a27bf7f21cd-service-ca\") pod \"569bc384-3b96-4207-8d46-5a27bf7f21cd\" (UID: \"569bc384-3b96-4207-8d46-5a27bf7f21cd\") " Jan 29 11:40:42 crc kubenswrapper[4766]: I0129 11:40:42.481668 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxk6x\" (UniqueName: \"kubernetes.io/projected/569bc384-3b96-4207-8d46-5a27bf7f21cd-kube-api-access-cxk6x\") 
pod \"569bc384-3b96-4207-8d46-5a27bf7f21cd\" (UID: \"569bc384-3b96-4207-8d46-5a27bf7f21cd\") " Jan 29 11:40:42 crc kubenswrapper[4766]: I0129 11:40:42.482534 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/569bc384-3b96-4207-8d46-5a27bf7f21cd-console-config" (OuterVolumeSpecName: "console-config") pod "569bc384-3b96-4207-8d46-5a27bf7f21cd" (UID: "569bc384-3b96-4207-8d46-5a27bf7f21cd"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:40:42 crc kubenswrapper[4766]: I0129 11:40:42.482551 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/569bc384-3b96-4207-8d46-5a27bf7f21cd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "569bc384-3b96-4207-8d46-5a27bf7f21cd" (UID: "569bc384-3b96-4207-8d46-5a27bf7f21cd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:40:42 crc kubenswrapper[4766]: I0129 11:40:42.482595 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/569bc384-3b96-4207-8d46-5a27bf7f21cd-service-ca" (OuterVolumeSpecName: "service-ca") pod "569bc384-3b96-4207-8d46-5a27bf7f21cd" (UID: "569bc384-3b96-4207-8d46-5a27bf7f21cd"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:40:42 crc kubenswrapper[4766]: I0129 11:40:42.482605 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/569bc384-3b96-4207-8d46-5a27bf7f21cd-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "569bc384-3b96-4207-8d46-5a27bf7f21cd" (UID: "569bc384-3b96-4207-8d46-5a27bf7f21cd"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:40:42 crc kubenswrapper[4766]: I0129 11:40:42.487714 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/569bc384-3b96-4207-8d46-5a27bf7f21cd-kube-api-access-cxk6x" (OuterVolumeSpecName: "kube-api-access-cxk6x") pod "569bc384-3b96-4207-8d46-5a27bf7f21cd" (UID: "569bc384-3b96-4207-8d46-5a27bf7f21cd"). InnerVolumeSpecName "kube-api-access-cxk6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:40:42 crc kubenswrapper[4766]: I0129 11:40:42.488171 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/569bc384-3b96-4207-8d46-5a27bf7f21cd-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "569bc384-3b96-4207-8d46-5a27bf7f21cd" (UID: "569bc384-3b96-4207-8d46-5a27bf7f21cd"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:40:42 crc kubenswrapper[4766]: I0129 11:40:42.488432 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/569bc384-3b96-4207-8d46-5a27bf7f21cd-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "569bc384-3b96-4207-8d46-5a27bf7f21cd" (UID: "569bc384-3b96-4207-8d46-5a27bf7f21cd"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:40:42 crc kubenswrapper[4766]: I0129 11:40:42.582522 4766 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/569bc384-3b96-4207-8d46-5a27bf7f21cd-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:40:42 crc kubenswrapper[4766]: I0129 11:40:42.582553 4766 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/569bc384-3b96-4207-8d46-5a27bf7f21cd-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:40:42 crc kubenswrapper[4766]: I0129 11:40:42.582564 4766 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/569bc384-3b96-4207-8d46-5a27bf7f21cd-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:40:42 crc kubenswrapper[4766]: I0129 11:40:42.582572 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/569bc384-3b96-4207-8d46-5a27bf7f21cd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:40:42 crc kubenswrapper[4766]: I0129 11:40:42.582581 4766 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/569bc384-3b96-4207-8d46-5a27bf7f21cd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:40:42 crc kubenswrapper[4766]: I0129 11:40:42.582589 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxk6x\" (UniqueName: \"kubernetes.io/projected/569bc384-3b96-4207-8d46-5a27bf7f21cd-kube-api-access-cxk6x\") on node \"crc\" DevicePath \"\"" Jan 29 11:40:42 crc kubenswrapper[4766]: I0129 11:40:42.582598 4766 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/569bc384-3b96-4207-8d46-5a27bf7f21cd-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:40:42 crc kubenswrapper[4766]: I0129 11:40:42.989473 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-ncttr_569bc384-3b96-4207-8d46-5a27bf7f21cd/console/0.log" Jan 29 11:40:43 crc kubenswrapper[4766]: I0129 11:40:42.990120 4766 generic.go:334] "Generic (PLEG): container finished" podID="569bc384-3b96-4207-8d46-5a27bf7f21cd" containerID="835a2e7e5ae0c51e2820db21831e46bafad02533bab7dd4328154a7c3c0f665f" exitCode=2 Jan 29 11:40:43 crc kubenswrapper[4766]: I0129 11:40:42.990187 4766 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 11:40:43 crc kubenswrapper[4766]: I0129 11:40:42.990187 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ncttr"
Jan 29 11:40:43 crc kubenswrapper[4766]: I0129 11:40:42.990187 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ncttr" event={"ID":"569bc384-3b96-4207-8d46-5a27bf7f21cd","Type":"ContainerDied","Data":"835a2e7e5ae0c51e2820db21831e46bafad02533bab7dd4328154a7c3c0f665f"}
Jan 29 11:40:43 crc kubenswrapper[4766]: I0129 11:40:42.990231 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ncttr" event={"ID":"569bc384-3b96-4207-8d46-5a27bf7f21cd","Type":"ContainerDied","Data":"1e5c00803a283cdc6130f856d8323901c2a7547e6dff9412f2efb558d407c1d0"}
Jan 29 11:40:43 crc kubenswrapper[4766]: I0129 11:40:42.990250 4766 scope.go:117] "RemoveContainer" containerID="835a2e7e5ae0c51e2820db21831e46bafad02533bab7dd4328154a7c3c0f665f"
Jan 29 11:40:43 crc kubenswrapper[4766]: I0129 11:40:43.007603 4766 scope.go:117] "RemoveContainer" containerID="835a2e7e5ae0c51e2820db21831e46bafad02533bab7dd4328154a7c3c0f665f"
Jan 29 11:40:43 crc kubenswrapper[4766]: E0129 11:40:43.007976 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"835a2e7e5ae0c51e2820db21831e46bafad02533bab7dd4328154a7c3c0f665f\": container with ID starting with 835a2e7e5ae0c51e2820db21831e46bafad02533bab7dd4328154a7c3c0f665f not found: ID does not exist" containerID="835a2e7e5ae0c51e2820db21831e46bafad02533bab7dd4328154a7c3c0f665f"
Jan 29 11:40:43 crc kubenswrapper[4766]: I0129 11:40:43.008003 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"835a2e7e5ae0c51e2820db21831e46bafad02533bab7dd4328154a7c3c0f665f"} err="failed to get container status \"835a2e7e5ae0c51e2820db21831e46bafad02533bab7dd4328154a7c3c0f665f\": rpc error: code = NotFound desc = could not find container \"835a2e7e5ae0c51e2820db21831e46bafad02533bab7dd4328154a7c3c0f665f\": container with ID starting with 835a2e7e5ae0c51e2820db21831e46bafad02533bab7dd4328154a7c3c0f665f not found: ID does not exist"
Jan 29 11:40:43 crc kubenswrapper[4766]: I0129 11:40:43.017944 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-ncttr"]
Jan 29 11:40:43 crc kubenswrapper[4766]: I0129 11:40:43.023107 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-ncttr"]
Jan 29 11:40:43 crc kubenswrapper[4766]: I0129 11:40:43.210765 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz"
Jan 29 11:40:43 crc kubenswrapper[4766]: I0129 11:40:43.231397 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="569bc384-3b96-4207-8d46-5a27bf7f21cd" path="/var/lib/kubelet/pods/569bc384-3b96-4207-8d46-5a27bf7f21cd/volumes"
Jan 29 11:40:43 crc kubenswrapper[4766]: I0129 11:40:43.390586 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5d54a74-7c01-406c-9b46-c2dd7df8fb9e-bundle\") pod \"a5d54a74-7c01-406c-9b46-c2dd7df8fb9e\" (UID: \"a5d54a74-7c01-406c-9b46-c2dd7df8fb9e\") "
Jan 29 11:40:43 crc kubenswrapper[4766]: I0129 11:40:43.390939 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw8l8\" (UniqueName: \"kubernetes.io/projected/a5d54a74-7c01-406c-9b46-c2dd7df8fb9e-kube-api-access-fw8l8\") pod \"a5d54a74-7c01-406c-9b46-c2dd7df8fb9e\" (UID: \"a5d54a74-7c01-406c-9b46-c2dd7df8fb9e\") "
Jan 29 11:40:43 crc kubenswrapper[4766]: I0129 11:40:43.390980 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5d54a74-7c01-406c-9b46-c2dd7df8fb9e-util\") pod \"a5d54a74-7c01-406c-9b46-c2dd7df8fb9e\" (UID: \"a5d54a74-7c01-406c-9b46-c2dd7df8fb9e\") "
Jan 29 11:40:43 crc kubenswrapper[4766]: I0129 11:40:43.391729 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5d54a74-7c01-406c-9b46-c2dd7df8fb9e-bundle" (OuterVolumeSpecName: "bundle") pod "a5d54a74-7c01-406c-9b46-c2dd7df8fb9e" (UID: "a5d54a74-7c01-406c-9b46-c2dd7df8fb9e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:40:43 crc kubenswrapper[4766]: I0129 11:40:43.395577 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5d54a74-7c01-406c-9b46-c2dd7df8fb9e-kube-api-access-fw8l8" (OuterVolumeSpecName: "kube-api-access-fw8l8") pod "a5d54a74-7c01-406c-9b46-c2dd7df8fb9e" (UID: "a5d54a74-7c01-406c-9b46-c2dd7df8fb9e"). InnerVolumeSpecName "kube-api-access-fw8l8". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:40:43 crc kubenswrapper[4766]: I0129 11:40:43.492853 4766 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5d54a74-7c01-406c-9b46-c2dd7df8fb9e-util\") on node \"crc\" DevicePath \"\"" Jan 29 11:40:43 crc kubenswrapper[4766]: I0129 11:40:43.492923 4766 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5d54a74-7c01-406c-9b46-c2dd7df8fb9e-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:40:43 crc kubenswrapper[4766]: I0129 11:40:43.492934 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw8l8\" (UniqueName: \"kubernetes.io/projected/a5d54a74-7c01-406c-9b46-c2dd7df8fb9e-kube-api-access-fw8l8\") on node \"crc\" DevicePath \"\"" Jan 29 11:40:43 crc kubenswrapper[4766]: I0129 11:40:43.999285 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz" event={"ID":"a5d54a74-7c01-406c-9b46-c2dd7df8fb9e","Type":"ContainerDied","Data":"e2772239be2aba04e0cf932ee9eac0fdde5fc598bb1b68c80773f2f79507dee8"} Jan 29 11:40:43 crc kubenswrapper[4766]: I0129 11:40:43.999336 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2772239be2aba04e0cf932ee9eac0fdde5fc598bb1b68c80773f2f79507dee8" Jan 29 11:40:44 crc kubenswrapper[4766]: I0129 11:40:43.999700 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz" Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.251356 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7776d7d99d-7t5gz"] Jan 29 11:40:52 crc kubenswrapper[4766]: E0129 11:40:52.252131 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="569bc384-3b96-4207-8d46-5a27bf7f21cd" containerName="console" Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.252145 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="569bc384-3b96-4207-8d46-5a27bf7f21cd" containerName="console" Jan 29 11:40:52 crc kubenswrapper[4766]: E0129 11:40:52.252155 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5d54a74-7c01-406c-9b46-c2dd7df8fb9e" containerName="pull" Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.252161 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5d54a74-7c01-406c-9b46-c2dd7df8fb9e" containerName="pull" Jan 29 11:40:52 crc kubenswrapper[4766]: E0129 11:40:52.252168 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5d54a74-7c01-406c-9b46-c2dd7df8fb9e" containerName="extract" Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.252176 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5d54a74-7c01-406c-9b46-c2dd7df8fb9e" containerName="extract" Jan 29 11:40:52 crc kubenswrapper[4766]: E0129 11:40:52.252195 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5d54a74-7c01-406c-9b46-c2dd7df8fb9e" containerName="util" Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.252200 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5d54a74-7c01-406c-9b46-c2dd7df8fb9e" containerName="util" Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.252291 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5d54a74-7c01-406c-9b46-c2dd7df8fb9e" containerName="extract" Jan 
Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.252301 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="569bc384-3b96-4207-8d46-5a27bf7f21cd" containerName="console"
Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.252708 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7776d7d99d-7t5gz"
Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.255111 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.255545 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-b4f44"
Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.255881 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.257097 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.263688 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7776d7d99d-7t5gz"]
Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.264819 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.297943 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/18955c8a-3096-4daa-8173-5d90205581b7-webhook-cert\") pod \"metallb-operator-controller-manager-7776d7d99d-7t5gz\" (UID: \"18955c8a-3096-4daa-8173-5d90205581b7\") " pod="metallb-system/metallb-operator-controller-manager-7776d7d99d-7t5gz"
Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.298005 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r49dm\" (UniqueName: \"kubernetes.io/projected/18955c8a-3096-4daa-8173-5d90205581b7-kube-api-access-r49dm\") pod \"metallb-operator-controller-manager-7776d7d99d-7t5gz\" (UID: \"18955c8a-3096-4daa-8173-5d90205581b7\") " pod="metallb-system/metallb-operator-controller-manager-7776d7d99d-7t5gz"
Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.298038 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/18955c8a-3096-4daa-8173-5d90205581b7-apiservice-cert\") pod \"metallb-operator-controller-manager-7776d7d99d-7t5gz\" (UID: \"18955c8a-3096-4daa-8173-5d90205581b7\") " pod="metallb-system/metallb-operator-controller-manager-7776d7d99d-7t5gz"
Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.398554 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/18955c8a-3096-4daa-8173-5d90205581b7-apiservice-cert\") pod \"metallb-operator-controller-manager-7776d7d99d-7t5gz\" (UID: \"18955c8a-3096-4daa-8173-5d90205581b7\") " pod="metallb-system/metallb-operator-controller-manager-7776d7d99d-7t5gz"
\"kubernetes.io/secret/18955c8a-3096-4daa-8173-5d90205581b7-webhook-cert\") pod \"metallb-operator-controller-manager-7776d7d99d-7t5gz\" (UID: \"18955c8a-3096-4daa-8173-5d90205581b7\") " pod="metallb-system/metallb-operator-controller-manager-7776d7d99d-7t5gz" Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.398664 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r49dm\" (UniqueName: \"kubernetes.io/projected/18955c8a-3096-4daa-8173-5d90205581b7-kube-api-access-r49dm\") pod \"metallb-operator-controller-manager-7776d7d99d-7t5gz\" (UID: \"18955c8a-3096-4daa-8173-5d90205581b7\") " pod="metallb-system/metallb-operator-controller-manager-7776d7d99d-7t5gz" Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.405478 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/18955c8a-3096-4daa-8173-5d90205581b7-apiservice-cert\") pod \"metallb-operator-controller-manager-7776d7d99d-7t5gz\" (UID: \"18955c8a-3096-4daa-8173-5d90205581b7\") " pod="metallb-system/metallb-operator-controller-manager-7776d7d99d-7t5gz" Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.405981 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/18955c8a-3096-4daa-8173-5d90205581b7-webhook-cert\") pod \"metallb-operator-controller-manager-7776d7d99d-7t5gz\" (UID: \"18955c8a-3096-4daa-8173-5d90205581b7\") " pod="metallb-system/metallb-operator-controller-manager-7776d7d99d-7t5gz" Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.413845 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r49dm\" (UniqueName: \"kubernetes.io/projected/18955c8a-3096-4daa-8173-5d90205581b7-kube-api-access-r49dm\") pod \"metallb-operator-controller-manager-7776d7d99d-7t5gz\" (UID: \"18955c8a-3096-4daa-8173-5d90205581b7\") " pod="metallb-system/metallb-operator-controller-manager-7776d7d99d-7t5gz" Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.507920 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-78b44f4d5f-h2wr6"] Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.508804 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-78b44f4d5f-h2wr6" Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.510662 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.515259 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-8fmwj" Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.515310 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.528006 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-78b44f4d5f-h2wr6"] Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.567939 4766 util.go:30] "No sandbox for pod can be found. 
Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.567939 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7776d7d99d-7t5gz"
Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.702792 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/41ed541e-e7d7-4dfb-bfeb-5b7492fa1a0b-apiservice-cert\") pod \"metallb-operator-webhook-server-78b44f4d5f-h2wr6\" (UID: \"41ed541e-e7d7-4dfb-bfeb-5b7492fa1a0b\") " pod="metallb-system/metallb-operator-webhook-server-78b44f4d5f-h2wr6"
Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.702835 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcdtt\" (UniqueName: \"kubernetes.io/projected/41ed541e-e7d7-4dfb-bfeb-5b7492fa1a0b-kube-api-access-bcdtt\") pod \"metallb-operator-webhook-server-78b44f4d5f-h2wr6\" (UID: \"41ed541e-e7d7-4dfb-bfeb-5b7492fa1a0b\") " pod="metallb-system/metallb-operator-webhook-server-78b44f4d5f-h2wr6"
Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.703157 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/41ed541e-e7d7-4dfb-bfeb-5b7492fa1a0b-webhook-cert\") pod \"metallb-operator-webhook-server-78b44f4d5f-h2wr6\" (UID: \"41ed541e-e7d7-4dfb-bfeb-5b7492fa1a0b\") " pod="metallb-system/metallb-operator-webhook-server-78b44f4d5f-h2wr6"
Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.791074 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7776d7d99d-7t5gz"]
Jan 29 11:40:52 crc kubenswrapper[4766]: W0129 11:40:52.801553 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18955c8a_3096_4daa_8173_5d90205581b7.slice/crio-9563b5ebb48c915b13b665bd73a3478b6e8ef99a23a5a9c564683b8df3f415d3 WatchSource:0}: Error finding container 9563b5ebb48c915b13b665bd73a3478b6e8ef99a23a5a9c564683b8df3f415d3: Status 404 returned error can't find the container with id 9563b5ebb48c915b13b665bd73a3478b6e8ef99a23a5a9c564683b8df3f415d3
Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.804471 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/41ed541e-e7d7-4dfb-bfeb-5b7492fa1a0b-webhook-cert\") pod \"metallb-operator-webhook-server-78b44f4d5f-h2wr6\" (UID: \"41ed541e-e7d7-4dfb-bfeb-5b7492fa1a0b\") " pod="metallb-system/metallb-operator-webhook-server-78b44f4d5f-h2wr6"
Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.804532 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/41ed541e-e7d7-4dfb-bfeb-5b7492fa1a0b-apiservice-cert\") pod \"metallb-operator-webhook-server-78b44f4d5f-h2wr6\" (UID: \"41ed541e-e7d7-4dfb-bfeb-5b7492fa1a0b\") " pod="metallb-system/metallb-operator-webhook-server-78b44f4d5f-h2wr6"
Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.804556 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcdtt\" (UniqueName: \"kubernetes.io/projected/41ed541e-e7d7-4dfb-bfeb-5b7492fa1a0b-kube-api-access-bcdtt\") pod \"metallb-operator-webhook-server-78b44f4d5f-h2wr6\" (UID: \"41ed541e-e7d7-4dfb-bfeb-5b7492fa1a0b\") " pod="metallb-system/metallb-operator-webhook-server-78b44f4d5f-h2wr6"
Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.809324 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/41ed541e-e7d7-4dfb-bfeb-5b7492fa1a0b-apiservice-cert\") pod \"metallb-operator-webhook-server-78b44f4d5f-h2wr6\" (UID: \"41ed541e-e7d7-4dfb-bfeb-5b7492fa1a0b\") " pod="metallb-system/metallb-operator-webhook-server-78b44f4d5f-h2wr6"
Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.809345 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/41ed541e-e7d7-4dfb-bfeb-5b7492fa1a0b-webhook-cert\") pod \"metallb-operator-webhook-server-78b44f4d5f-h2wr6\" (UID: \"41ed541e-e7d7-4dfb-bfeb-5b7492fa1a0b\") " pod="metallb-system/metallb-operator-webhook-server-78b44f4d5f-h2wr6"
Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.820663 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcdtt\" (UniqueName: \"kubernetes.io/projected/41ed541e-e7d7-4dfb-bfeb-5b7492fa1a0b-kube-api-access-bcdtt\") pod \"metallb-operator-webhook-server-78b44f4d5f-h2wr6\" (UID: \"41ed541e-e7d7-4dfb-bfeb-5b7492fa1a0b\") " pod="metallb-system/metallb-operator-webhook-server-78b44f4d5f-h2wr6"
Jan 29 11:40:52 crc kubenswrapper[4766]: I0129 11:40:52.824304 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-78b44f4d5f-h2wr6"
Jan 29 11:40:53 crc kubenswrapper[4766]: I0129 11:40:53.034256 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-78b44f4d5f-h2wr6"]
Jan 29 11:40:53 crc kubenswrapper[4766]: W0129 11:40:53.045775 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41ed541e_e7d7_4dfb_bfeb_5b7492fa1a0b.slice/crio-b81ceeabf4849cf9f2a5ad083c1254623253a629bc6275382591c1721bf6fb35 WatchSource:0}: Error finding container b81ceeabf4849cf9f2a5ad083c1254623253a629bc6275382591c1721bf6fb35: Status 404 returned error can't find the container with id b81ceeabf4849cf9f2a5ad083c1254623253a629bc6275382591c1721bf6fb35
Jan 29 11:40:53 crc kubenswrapper[4766]: I0129 11:40:53.047817 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7776d7d99d-7t5gz" event={"ID":"18955c8a-3096-4daa-8173-5d90205581b7","Type":"ContainerStarted","Data":"9563b5ebb48c915b13b665bd73a3478b6e8ef99a23a5a9c564683b8df3f415d3"}
Jan 29 11:40:54 crc kubenswrapper[4766]: I0129 11:40:54.054948 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-78b44f4d5f-h2wr6" event={"ID":"41ed541e-e7d7-4dfb-bfeb-5b7492fa1a0b","Type":"ContainerStarted","Data":"b81ceeabf4849cf9f2a5ad083c1254623253a629bc6275382591c1721bf6fb35"}
Jan 29 11:40:56 crc kubenswrapper[4766]: I0129 11:40:56.068910 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7776d7d99d-7t5gz" event={"ID":"18955c8a-3096-4daa-8173-5d90205581b7","Type":"ContainerStarted","Data":"d31280cd0eb2d785ae14c14368ed965f5c237d6c2e6bf2ae5337e56e3c503846"}
Jan 29 11:40:56 crc kubenswrapper[4766]: I0129 11:40:56.069260 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7776d7d99d-7t5gz"
pod="metallb-system/metallb-operator-controller-manager-7776d7d99d-7t5gz" podStartSLOduration=1.621907222 podStartE2EDuration="4.097461278s" podCreationTimestamp="2026-01-29 11:40:52 +0000 UTC" firstStartedPulling="2026-01-29 11:40:52.804514798 +0000 UTC m=+1189.916907809" lastFinishedPulling="2026-01-29 11:40:55.280068854 +0000 UTC m=+1192.392461865" observedRunningTime="2026-01-29 11:40:56.091168719 +0000 UTC m=+1193.203561730" watchObservedRunningTime="2026-01-29 11:40:56.097461278 +0000 UTC m=+1193.209854289" Jan 29 11:40:59 crc kubenswrapper[4766]: I0129 11:40:59.097919 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-78b44f4d5f-h2wr6" event={"ID":"41ed541e-e7d7-4dfb-bfeb-5b7492fa1a0b","Type":"ContainerStarted","Data":"055e4cde62daf8b39be6ec12ee50688cb1c8601166367c486ec88892133c1ae3"} Jan 29 11:40:59 crc kubenswrapper[4766]: I0129 11:40:59.098559 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-78b44f4d5f-h2wr6" Jan 29 11:40:59 crc kubenswrapper[4766]: I0129 11:40:59.118684 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-78b44f4d5f-h2wr6" podStartSLOduration=1.884299207 podStartE2EDuration="7.118658376s" podCreationTimestamp="2026-01-29 11:40:52 +0000 UTC" firstStartedPulling="2026-01-29 11:40:53.049954521 +0000 UTC m=+1190.162347532" lastFinishedPulling="2026-01-29 11:40:58.28431369 +0000 UTC m=+1195.396706701" observedRunningTime="2026-01-29 11:40:59.115320831 +0000 UTC m=+1196.227713842" watchObservedRunningTime="2026-01-29 11:40:59.118658376 +0000 UTC m=+1196.231051397" Jan 29 11:41:12 crc kubenswrapper[4766]: I0129 11:41:12.831392 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-78b44f4d5f-h2wr6" Jan 29 11:41:16 crc kubenswrapper[4766]: I0129 11:41:16.362047 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:41:16 crc kubenswrapper[4766]: I0129 11:41:16.362387 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:41:32 crc kubenswrapper[4766]: I0129 11:41:32.570788 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7776d7d99d-7t5gz" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.219335 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-mnrnx"] Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.221404 4766 util.go:30] "No sandbox for pod can be found. 
Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.221404 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-mnrnx"
Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.227001 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.227660 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-zwxx7"
Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.232663 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-lfzb5"]
Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.233381 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lfzb5"
Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.235295 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.237106 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.268071 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-lfzb5"]
Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.323428 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-6rfrd"]
Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.324348 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-6rfrd"
Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.327401 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-8x8fb"
Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.327677 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.327818 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.327916 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.346957 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-n8gpm"]
Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.348426 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-n8gpm"
Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.350506 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9lw2\" (UniqueName: \"kubernetes.io/projected/865e99ee-6f4f-47b1-bd58-86910c5f3b83-kube-api-access-n9lw2\") pod \"frr-k8s-webhook-server-7df86c4f6c-lfzb5\" (UID: \"865e99ee-6f4f-47b1-bd58-86910c5f3b83\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lfzb5"
Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.355668 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f683d2c1-09f9-488a-9361-44f876f7a61a-metrics\") pod \"frr-k8s-mnrnx\" (UID: \"f683d2c1-09f9-488a-9361-44f876f7a61a\") " pod="metallb-system/frr-k8s-mnrnx"
Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.355771 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f683d2c1-09f9-488a-9361-44f876f7a61a-frr-conf\") pod \"frr-k8s-mnrnx\" (UID: \"f683d2c1-09f9-488a-9361-44f876f7a61a\") " pod="metallb-system/frr-k8s-mnrnx"
Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.355807 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f683d2c1-09f9-488a-9361-44f876f7a61a-reloader\") pod \"frr-k8s-mnrnx\" (UID: \"f683d2c1-09f9-488a-9361-44f876f7a61a\") " pod="metallb-system/frr-k8s-mnrnx"
Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.355897 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/865e99ee-6f4f-47b1-bd58-86910c5f3b83-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-lfzb5\" (UID: \"865e99ee-6f4f-47b1-bd58-86910c5f3b83\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lfzb5"
Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.356044 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f683d2c1-09f9-488a-9361-44f876f7a61a-frr-sockets\") pod \"frr-k8s-mnrnx\" (UID: \"f683d2c1-09f9-488a-9361-44f876f7a61a\") " pod="metallb-system/frr-k8s-mnrnx"
Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.356079 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q9gm\" (UniqueName: \"kubernetes.io/projected/f683d2c1-09f9-488a-9361-44f876f7a61a-kube-api-access-6q9gm\") pod \"frr-k8s-mnrnx\" (UID: \"f683d2c1-09f9-488a-9361-44f876f7a61a\") " pod="metallb-system/frr-k8s-mnrnx"
Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.356103 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f683d2c1-09f9-488a-9361-44f876f7a61a-frr-startup\") pod \"frr-k8s-mnrnx\" (UID: \"f683d2c1-09f9-488a-9361-44f876f7a61a\") " pod="metallb-system/frr-k8s-mnrnx"
pod="metallb-system/frr-k8s-mnrnx" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.360225 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.362146 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-n8gpm"] Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.457287 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/28dff331-a770-4c20-b111-608aad657cf7-metallb-excludel2\") pod \"speaker-6rfrd\" (UID: \"28dff331-a770-4c20-b111-608aad657cf7\") " pod="metallb-system/speaker-6rfrd" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.458018 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sxhs\" (UniqueName: \"kubernetes.io/projected/28dff331-a770-4c20-b111-608aad657cf7-kube-api-access-4sxhs\") pod \"speaker-6rfrd\" (UID: \"28dff331-a770-4c20-b111-608aad657cf7\") " pod="metallb-system/speaker-6rfrd" Jan 29 11:41:33 crc kubenswrapper[4766]: E0129 11:41:33.458327 4766 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.458660 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/865e99ee-6f4f-47b1-bd58-86910c5f3b83-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-lfzb5\" (UID: \"865e99ee-6f4f-47b1-bd58-86910c5f3b83\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lfzb5" Jan 29 11:41:33 crc kubenswrapper[4766]: E0129 11:41:33.458875 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/865e99ee-6f4f-47b1-bd58-86910c5f3b83-cert podName:865e99ee-6f4f-47b1-bd58-86910c5f3b83 nodeName:}" failed. No retries permitted until 2026-01-29 11:41:33.958396524 +0000 UTC m=+1231.070789535 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/865e99ee-6f4f-47b1-bd58-86910c5f3b83-cert") pod "frr-k8s-webhook-server-7df86c4f6c-lfzb5" (UID: "865e99ee-6f4f-47b1-bd58-86910c5f3b83") : secret "frr-k8s-webhook-server-cert" not found Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.458923 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28dff331-a770-4c20-b111-608aad657cf7-metrics-certs\") pod \"speaker-6rfrd\" (UID: \"28dff331-a770-4c20-b111-608aad657cf7\") " pod="metallb-system/speaker-6rfrd" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.458977 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/05465246-85ae-41ab-8696-d92c3e8f1231-cert\") pod \"controller-6968d8fdc4-n8gpm\" (UID: \"05465246-85ae-41ab-8696-d92c3e8f1231\") " pod="metallb-system/controller-6968d8fdc4-n8gpm" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.459056 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58djn\" (UniqueName: \"kubernetes.io/projected/05465246-85ae-41ab-8696-d92c3e8f1231-kube-api-access-58djn\") pod \"controller-6968d8fdc4-n8gpm\" (UID: \"05465246-85ae-41ab-8696-d92c3e8f1231\") " pod="metallb-system/controller-6968d8fdc4-n8gpm" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.459107 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/28dff331-a770-4c20-b111-608aad657cf7-memberlist\") pod \"speaker-6rfrd\" (UID: \"28dff331-a770-4c20-b111-608aad657cf7\") " pod="metallb-system/speaker-6rfrd" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.459148 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f683d2c1-09f9-488a-9361-44f876f7a61a-frr-sockets\") pod \"frr-k8s-mnrnx\" (UID: \"f683d2c1-09f9-488a-9361-44f876f7a61a\") " pod="metallb-system/frr-k8s-mnrnx" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.459188 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q9gm\" (UniqueName: \"kubernetes.io/projected/f683d2c1-09f9-488a-9361-44f876f7a61a-kube-api-access-6q9gm\") pod \"frr-k8s-mnrnx\" (UID: \"f683d2c1-09f9-488a-9361-44f876f7a61a\") " pod="metallb-system/frr-k8s-mnrnx" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.459207 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f683d2c1-09f9-488a-9361-44f876f7a61a-frr-startup\") pod \"frr-k8s-mnrnx\" (UID: \"f683d2c1-09f9-488a-9361-44f876f7a61a\") " pod="metallb-system/frr-k8s-mnrnx" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.459779 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f683d2c1-09f9-488a-9361-44f876f7a61a-frr-sockets\") pod \"frr-k8s-mnrnx\" (UID: \"f683d2c1-09f9-488a-9361-44f876f7a61a\") " pod="metallb-system/frr-k8s-mnrnx" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.459840 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f683d2c1-09f9-488a-9361-44f876f7a61a-metrics-certs\") pod \"frr-k8s-mnrnx\" 
(UID: \"f683d2c1-09f9-488a-9361-44f876f7a61a\") " pod="metallb-system/frr-k8s-mnrnx" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.460471 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f683d2c1-09f9-488a-9361-44f876f7a61a-frr-startup\") pod \"frr-k8s-mnrnx\" (UID: \"f683d2c1-09f9-488a-9361-44f876f7a61a\") " pod="metallb-system/frr-k8s-mnrnx" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.460666 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9lw2\" (UniqueName: \"kubernetes.io/projected/865e99ee-6f4f-47b1-bd58-86910c5f3b83-kube-api-access-n9lw2\") pod \"frr-k8s-webhook-server-7df86c4f6c-lfzb5\" (UID: \"865e99ee-6f4f-47b1-bd58-86910c5f3b83\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lfzb5" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.460706 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f683d2c1-09f9-488a-9361-44f876f7a61a-metrics\") pod \"frr-k8s-mnrnx\" (UID: \"f683d2c1-09f9-488a-9361-44f876f7a61a\") " pod="metallb-system/frr-k8s-mnrnx" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.460732 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05465246-85ae-41ab-8696-d92c3e8f1231-metrics-certs\") pod \"controller-6968d8fdc4-n8gpm\" (UID: \"05465246-85ae-41ab-8696-d92c3e8f1231\") " pod="metallb-system/controller-6968d8fdc4-n8gpm" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.460769 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f683d2c1-09f9-488a-9361-44f876f7a61a-frr-conf\") pod \"frr-k8s-mnrnx\" (UID: \"f683d2c1-09f9-488a-9361-44f876f7a61a\") " pod="metallb-system/frr-k8s-mnrnx" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.460783 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f683d2c1-09f9-488a-9361-44f876f7a61a-reloader\") pod \"frr-k8s-mnrnx\" (UID: \"f683d2c1-09f9-488a-9361-44f876f7a61a\") " pod="metallb-system/frr-k8s-mnrnx" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.461026 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f683d2c1-09f9-488a-9361-44f876f7a61a-reloader\") pod \"frr-k8s-mnrnx\" (UID: \"f683d2c1-09f9-488a-9361-44f876f7a61a\") " pod="metallb-system/frr-k8s-mnrnx" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.461281 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f683d2c1-09f9-488a-9361-44f876f7a61a-metrics\") pod \"frr-k8s-mnrnx\" (UID: \"f683d2c1-09f9-488a-9361-44f876f7a61a\") " pod="metallb-system/frr-k8s-mnrnx" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.461610 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f683d2c1-09f9-488a-9361-44f876f7a61a-frr-conf\") pod \"frr-k8s-mnrnx\" (UID: \"f683d2c1-09f9-488a-9361-44f876f7a61a\") " pod="metallb-system/frr-k8s-mnrnx" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.472135 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/f683d2c1-09f9-488a-9361-44f876f7a61a-metrics-certs\") pod \"frr-k8s-mnrnx\" (UID: \"f683d2c1-09f9-488a-9361-44f876f7a61a\") " pod="metallb-system/frr-k8s-mnrnx" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.486367 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9lw2\" (UniqueName: \"kubernetes.io/projected/865e99ee-6f4f-47b1-bd58-86910c5f3b83-kube-api-access-n9lw2\") pod \"frr-k8s-webhook-server-7df86c4f6c-lfzb5\" (UID: \"865e99ee-6f4f-47b1-bd58-86910c5f3b83\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lfzb5" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.489906 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q9gm\" (UniqueName: \"kubernetes.io/projected/f683d2c1-09f9-488a-9361-44f876f7a61a-kube-api-access-6q9gm\") pod \"frr-k8s-mnrnx\" (UID: \"f683d2c1-09f9-488a-9361-44f876f7a61a\") " pod="metallb-system/frr-k8s-mnrnx" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.540718 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-mnrnx" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.562267 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05465246-85ae-41ab-8696-d92c3e8f1231-metrics-certs\") pod \"controller-6968d8fdc4-n8gpm\" (UID: \"05465246-85ae-41ab-8696-d92c3e8f1231\") " pod="metallb-system/controller-6968d8fdc4-n8gpm" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.562342 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/28dff331-a770-4c20-b111-608aad657cf7-metallb-excludel2\") pod \"speaker-6rfrd\" (UID: \"28dff331-a770-4c20-b111-608aad657cf7\") " pod="metallb-system/speaker-6rfrd" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.562379 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sxhs\" (UniqueName: \"kubernetes.io/projected/28dff331-a770-4c20-b111-608aad657cf7-kube-api-access-4sxhs\") pod \"speaker-6rfrd\" (UID: \"28dff331-a770-4c20-b111-608aad657cf7\") " pod="metallb-system/speaker-6rfrd" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.562437 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28dff331-a770-4c20-b111-608aad657cf7-metrics-certs\") pod \"speaker-6rfrd\" (UID: \"28dff331-a770-4c20-b111-608aad657cf7\") " pod="metallb-system/speaker-6rfrd" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.562455 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/05465246-85ae-41ab-8696-d92c3e8f1231-cert\") pod \"controller-6968d8fdc4-n8gpm\" (UID: \"05465246-85ae-41ab-8696-d92c3e8f1231\") " pod="metallb-system/controller-6968d8fdc4-n8gpm" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.562480 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58djn\" (UniqueName: \"kubernetes.io/projected/05465246-85ae-41ab-8696-d92c3e8f1231-kube-api-access-58djn\") pod \"controller-6968d8fdc4-n8gpm\" (UID: \"05465246-85ae-41ab-8696-d92c3e8f1231\") " pod="metallb-system/controller-6968d8fdc4-n8gpm" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.562501 4766 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/28dff331-a770-4c20-b111-608aad657cf7-memberlist\") pod \"speaker-6rfrd\" (UID: \"28dff331-a770-4c20-b111-608aad657cf7\") " pod="metallb-system/speaker-6rfrd" Jan 29 11:41:33 crc kubenswrapper[4766]: E0129 11:41:33.562806 4766 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 11:41:33 crc kubenswrapper[4766]: E0129 11:41:33.562852 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28dff331-a770-4c20-b111-608aad657cf7-memberlist podName:28dff331-a770-4c20-b111-608aad657cf7 nodeName:}" failed. No retries permitted until 2026-01-29 11:41:34.062837813 +0000 UTC m=+1231.175230824 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/28dff331-a770-4c20-b111-608aad657cf7-memberlist") pod "speaker-6rfrd" (UID: "28dff331-a770-4c20-b111-608aad657cf7") : secret "metallb-memberlist" not found Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.564122 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/28dff331-a770-4c20-b111-608aad657cf7-metallb-excludel2\") pod \"speaker-6rfrd\" (UID: \"28dff331-a770-4c20-b111-608aad657cf7\") " pod="metallb-system/speaker-6rfrd" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.565589 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.567069 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28dff331-a770-4c20-b111-608aad657cf7-metrics-certs\") pod \"speaker-6rfrd\" (UID: \"28dff331-a770-4c20-b111-608aad657cf7\") " pod="metallb-system/speaker-6rfrd" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.567337 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05465246-85ae-41ab-8696-d92c3e8f1231-metrics-certs\") pod \"controller-6968d8fdc4-n8gpm\" (UID: \"05465246-85ae-41ab-8696-d92c3e8f1231\") " pod="metallb-system/controller-6968d8fdc4-n8gpm" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.577663 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/05465246-85ae-41ab-8696-d92c3e8f1231-cert\") pod \"controller-6968d8fdc4-n8gpm\" (UID: \"05465246-85ae-41ab-8696-d92c3e8f1231\") " pod="metallb-system/controller-6968d8fdc4-n8gpm" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.580687 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sxhs\" (UniqueName: \"kubernetes.io/projected/28dff331-a770-4c20-b111-608aad657cf7-kube-api-access-4sxhs\") pod \"speaker-6rfrd\" (UID: \"28dff331-a770-4c20-b111-608aad657cf7\") " pod="metallb-system/speaker-6rfrd" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.580896 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58djn\" (UniqueName: \"kubernetes.io/projected/05465246-85ae-41ab-8696-d92c3e8f1231-kube-api-access-58djn\") pod \"controller-6968d8fdc4-n8gpm\" (UID: \"05465246-85ae-41ab-8696-d92c3e8f1231\") " pod="metallb-system/controller-6968d8fdc4-n8gpm" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.670237 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-n8gpm" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.968028 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/865e99ee-6f4f-47b1-bd58-86910c5f3b83-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-lfzb5\" (UID: \"865e99ee-6f4f-47b1-bd58-86910c5f3b83\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lfzb5" Jan 29 11:41:33 crc kubenswrapper[4766]: I0129 11:41:33.974785 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/865e99ee-6f4f-47b1-bd58-86910c5f3b83-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-lfzb5\" (UID: \"865e99ee-6f4f-47b1-bd58-86910c5f3b83\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lfzb5" Jan 29 11:41:34 crc kubenswrapper[4766]: I0129 11:41:34.060540 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-n8gpm"] Jan 29 11:41:34 crc kubenswrapper[4766]: I0129 11:41:34.069027 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/28dff331-a770-4c20-b111-608aad657cf7-memberlist\") pod \"speaker-6rfrd\" (UID: \"28dff331-a770-4c20-b111-608aad657cf7\") " pod="metallb-system/speaker-6rfrd" Jan 29 11:41:34 crc kubenswrapper[4766]: E0129 11:41:34.069163 4766 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 11:41:34 crc kubenswrapper[4766]: E0129 11:41:34.069257 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28dff331-a770-4c20-b111-608aad657cf7-memberlist podName:28dff331-a770-4c20-b111-608aad657cf7 nodeName:}" failed. No retries permitted until 2026-01-29 11:41:35.069236136 +0000 UTC m=+1232.181629147 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/28dff331-a770-4c20-b111-608aad657cf7-memberlist") pod "speaker-6rfrd" (UID: "28dff331-a770-4c20-b111-608aad657cf7") : secret "metallb-memberlist" not found Jan 29 11:41:34 crc kubenswrapper[4766]: I0129 11:41:34.160137 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lfzb5" Jan 29 11:41:34 crc kubenswrapper[4766]: I0129 11:41:34.304181 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mnrnx" event={"ID":"f683d2c1-09f9-488a-9361-44f876f7a61a","Type":"ContainerStarted","Data":"3129a88d8601a5aa7b4f73f9506a5984b8554ba73e5702768faeb473ee9218f3"} Jan 29 11:41:34 crc kubenswrapper[4766]: I0129 11:41:34.307825 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-n8gpm" event={"ID":"05465246-85ae-41ab-8696-d92c3e8f1231","Type":"ContainerStarted","Data":"2db23f9ec6aa2305af751bc46cba3a6b35cf3e484a62d462895d6ca117ee7bf1"} Jan 29 11:41:34 crc kubenswrapper[4766]: I0129 11:41:34.307879 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-n8gpm" event={"ID":"05465246-85ae-41ab-8696-d92c3e8f1231","Type":"ContainerStarted","Data":"21ece8b4877943948bf43ba48743d5c407428ca93cbf2585b11ee122b00f2b65"} Jan 29 11:41:34 crc kubenswrapper[4766]: I0129 11:41:34.369505 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-lfzb5"] Jan 29 11:41:35 crc kubenswrapper[4766]: I0129 11:41:35.085823 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/28dff331-a770-4c20-b111-608aad657cf7-memberlist\") pod \"speaker-6rfrd\" (UID: \"28dff331-a770-4c20-b111-608aad657cf7\") " pod="metallb-system/speaker-6rfrd" Jan 29 11:41:35 crc kubenswrapper[4766]: I0129 11:41:35.107249 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/28dff331-a770-4c20-b111-608aad657cf7-memberlist\") pod \"speaker-6rfrd\" (UID: \"28dff331-a770-4c20-b111-608aad657cf7\") " pod="metallb-system/speaker-6rfrd" Jan 29 11:41:35 crc kubenswrapper[4766]: I0129 11:41:35.142926 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-6rfrd" Jan 29 11:41:35 crc kubenswrapper[4766]: I0129 11:41:35.321888 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-n8gpm" event={"ID":"05465246-85ae-41ab-8696-d92c3e8f1231","Type":"ContainerStarted","Data":"40495a94727de7ea620205322a31f165911c0a041ad0fde25348bc7fae7494d3"} Jan 29 11:41:35 crc kubenswrapper[4766]: I0129 11:41:35.322692 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-n8gpm" Jan 29 11:41:35 crc kubenswrapper[4766]: I0129 11:41:35.324239 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lfzb5" event={"ID":"865e99ee-6f4f-47b1-bd58-86910c5f3b83","Type":"ContainerStarted","Data":"2af707b9fa76e787fbb7b32160cd685d7209a017f5987deb730cf20cea71d527"} Jan 29 11:41:35 crc kubenswrapper[4766]: I0129 11:41:35.325659 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6rfrd" event={"ID":"28dff331-a770-4c20-b111-608aad657cf7","Type":"ContainerStarted","Data":"48c063c35584600da8ba03e2bbb18c48b9a5edb0a49b4fb469ada1c9738459bf"} Jan 29 11:41:35 crc kubenswrapper[4766]: I0129 11:41:35.337379 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-n8gpm" podStartSLOduration=2.337356092 podStartE2EDuration="2.337356092s" podCreationTimestamp="2026-01-29 11:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:41:35.337011022 +0000 UTC m=+1232.449404033" watchObservedRunningTime="2026-01-29 11:41:35.337356092 +0000 UTC m=+1232.449749123" Jan 29 11:41:36 crc kubenswrapper[4766]: I0129 11:41:36.344717 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6rfrd" event={"ID":"28dff331-a770-4c20-b111-608aad657cf7","Type":"ContainerStarted","Data":"a02f80e1e2585c22bb3c908ee2d7f37458b518c3c4021def45d3c63a0c0f8878"} Jan 29 11:41:36 crc kubenswrapper[4766]: I0129 11:41:36.345150 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6rfrd" event={"ID":"28dff331-a770-4c20-b111-608aad657cf7","Type":"ContainerStarted","Data":"10d1c348d31f745c79f1436b1e17598c044479985ca3a47dfb8b1d0a48ea5c03"} Jan 29 11:41:36 crc kubenswrapper[4766]: I0129 11:41:36.385402 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-6rfrd" podStartSLOduration=3.385377839 podStartE2EDuration="3.385377839s" podCreationTimestamp="2026-01-29 11:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:41:36.373561321 +0000 UTC m=+1233.485954342" watchObservedRunningTime="2026-01-29 11:41:36.385377839 +0000 UTC m=+1233.497770860" Jan 29 11:41:37 crc kubenswrapper[4766]: I0129 11:41:37.353044 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-6rfrd" Jan 29 11:41:43 crc kubenswrapper[4766]: I0129 11:41:43.385218 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lfzb5" event={"ID":"865e99ee-6f4f-47b1-bd58-86910c5f3b83","Type":"ContainerStarted","Data":"c33fbcb5b37a3cb062ede12407ea4d1182052804af22cfec622f1c4702c8202c"} Jan 29 11:41:43 crc kubenswrapper[4766]: I0129 11:41:43.385629 4766 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lfzb5" Jan 29 11:41:43 crc kubenswrapper[4766]: I0129 11:41:43.386499 4766 generic.go:334] "Generic (PLEG): container finished" podID="f683d2c1-09f9-488a-9361-44f876f7a61a" containerID="6724280cc33d365d9323c142e769f95fb82fb6af36fc737695ab6c8bc82c8187" exitCode=0 Jan 29 11:41:43 crc kubenswrapper[4766]: I0129 11:41:43.386527 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mnrnx" event={"ID":"f683d2c1-09f9-488a-9361-44f876f7a61a","Type":"ContainerDied","Data":"6724280cc33d365d9323c142e769f95fb82fb6af36fc737695ab6c8bc82c8187"} Jan 29 11:41:43 crc kubenswrapper[4766]: I0129 11:41:43.402919 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lfzb5" podStartSLOduration=1.9489379919999998 podStartE2EDuration="10.402899141s" podCreationTimestamp="2026-01-29 11:41:33 +0000 UTC" firstStartedPulling="2026-01-29 11:41:34.37850003 +0000 UTC m=+1231.490893041" lastFinishedPulling="2026-01-29 11:41:42.832461179 +0000 UTC m=+1239.944854190" observedRunningTime="2026-01-29 11:41:43.39997555 +0000 UTC m=+1240.512368571" watchObservedRunningTime="2026-01-29 11:41:43.402899141 +0000 UTC m=+1240.515292172" Jan 29 11:41:44 crc kubenswrapper[4766]: I0129 11:41:44.397561 4766 generic.go:334] "Generic (PLEG): container finished" podID="f683d2c1-09f9-488a-9361-44f876f7a61a" containerID="e757a2c6230aba4f339f8a3b9524097b1b3b9378afe16244e20b3424af7f5dca" exitCode=0 Jan 29 11:41:44 crc kubenswrapper[4766]: I0129 11:41:44.397633 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mnrnx" event={"ID":"f683d2c1-09f9-488a-9361-44f876f7a61a","Type":"ContainerDied","Data":"e757a2c6230aba4f339f8a3b9524097b1b3b9378afe16244e20b3424af7f5dca"} Jan 29 11:41:45 crc kubenswrapper[4766]: I0129 11:41:45.147318 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-6rfrd" Jan 29 11:41:45 crc kubenswrapper[4766]: I0129 11:41:45.405191 4766 generic.go:334] "Generic (PLEG): container finished" podID="f683d2c1-09f9-488a-9361-44f876f7a61a" containerID="2945a1b141eacf391b20f924cad8dc43fc11d96fbc2a8bdf8ddc66d29d4b8e6e" exitCode=0 Jan 29 11:41:45 crc kubenswrapper[4766]: I0129 11:41:45.405240 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mnrnx" event={"ID":"f683d2c1-09f9-488a-9361-44f876f7a61a","Type":"ContainerDied","Data":"2945a1b141eacf391b20f924cad8dc43fc11d96fbc2a8bdf8ddc66d29d4b8e6e"} Jan 29 11:41:46 crc kubenswrapper[4766]: I0129 11:41:46.361684 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:41:46 crc kubenswrapper[4766]: I0129 11:41:46.362236 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:41:46 crc kubenswrapper[4766]: I0129 11:41:46.428567 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b"] Jan 29 11:41:46 
crc kubenswrapper[4766]: I0129 11:41:46.431210 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b" Jan 29 11:41:46 crc kubenswrapper[4766]: I0129 11:41:46.432375 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mnrnx" event={"ID":"f683d2c1-09f9-488a-9361-44f876f7a61a","Type":"ContainerStarted","Data":"e4495a6753ae5984b14508d25242ca1dc255f16899d0bd8085a29a93091e1174"} Jan 29 11:41:46 crc kubenswrapper[4766]: I0129 11:41:46.432578 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mnrnx" event={"ID":"f683d2c1-09f9-488a-9361-44f876f7a61a","Type":"ContainerStarted","Data":"d39e0fa4982ccf87abe4c3cbcd373d16f3b21700be87d2e72845234a8d075362"} Jan 29 11:41:46 crc kubenswrapper[4766]: I0129 11:41:46.432681 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mnrnx" event={"ID":"f683d2c1-09f9-488a-9361-44f876f7a61a","Type":"ContainerStarted","Data":"f9922d9617965db70a8148175017585c6c17cc0b9c788c588905c8c98fa1ca70"} Jan 29 11:41:46 crc kubenswrapper[4766]: I0129 11:41:46.432801 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mnrnx" event={"ID":"f683d2c1-09f9-488a-9361-44f876f7a61a","Type":"ContainerStarted","Data":"bee7e1a5ea360b07f28661f3989d44e32b389b9455c856bc34f95802ecac88ce"} Jan 29 11:41:46 crc kubenswrapper[4766]: I0129 11:41:46.432904 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mnrnx" event={"ID":"f683d2c1-09f9-488a-9361-44f876f7a61a","Type":"ContainerStarted","Data":"b9ec8e252caaec0433bcdcebb8bc692010bc476d5fae771ae8afbbdbb4ad959f"} Jan 29 11:41:46 crc kubenswrapper[4766]: I0129 11:41:46.436087 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 11:41:46 crc kubenswrapper[4766]: I0129 11:41:46.449201 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b"] Jan 29 11:41:46 crc kubenswrapper[4766]: I0129 11:41:46.543388 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/498cad84-d6b7-4732-bdee-39dec01c2829-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b\" (UID: \"498cad84-d6b7-4732-bdee-39dec01c2829\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b" Jan 29 11:41:46 crc kubenswrapper[4766]: I0129 11:41:46.543461 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/498cad84-d6b7-4732-bdee-39dec01c2829-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b\" (UID: \"498cad84-d6b7-4732-bdee-39dec01c2829\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b" Jan 29 11:41:46 crc kubenswrapper[4766]: I0129 11:41:46.543492 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc8rt\" (UniqueName: \"kubernetes.io/projected/498cad84-d6b7-4732-bdee-39dec01c2829-kube-api-access-dc8rt\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b\" (UID: \"498cad84-d6b7-4732-bdee-39dec01c2829\") " 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b" Jan 29 11:41:46 crc kubenswrapper[4766]: I0129 11:41:46.645010 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/498cad84-d6b7-4732-bdee-39dec01c2829-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b\" (UID: \"498cad84-d6b7-4732-bdee-39dec01c2829\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b" Jan 29 11:41:46 crc kubenswrapper[4766]: I0129 11:41:46.645088 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/498cad84-d6b7-4732-bdee-39dec01c2829-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b\" (UID: \"498cad84-d6b7-4732-bdee-39dec01c2829\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b" Jan 29 11:41:46 crc kubenswrapper[4766]: I0129 11:41:46.645123 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc8rt\" (UniqueName: \"kubernetes.io/projected/498cad84-d6b7-4732-bdee-39dec01c2829-kube-api-access-dc8rt\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b\" (UID: \"498cad84-d6b7-4732-bdee-39dec01c2829\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b" Jan 29 11:41:46 crc kubenswrapper[4766]: I0129 11:41:46.645606 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/498cad84-d6b7-4732-bdee-39dec01c2829-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b\" (UID: \"498cad84-d6b7-4732-bdee-39dec01c2829\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b" Jan 29 11:41:46 crc kubenswrapper[4766]: I0129 11:41:46.645692 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/498cad84-d6b7-4732-bdee-39dec01c2829-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b\" (UID: \"498cad84-d6b7-4732-bdee-39dec01c2829\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b" Jan 29 11:41:46 crc kubenswrapper[4766]: I0129 11:41:46.664903 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc8rt\" (UniqueName: \"kubernetes.io/projected/498cad84-d6b7-4732-bdee-39dec01c2829-kube-api-access-dc8rt\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b\" (UID: \"498cad84-d6b7-4732-bdee-39dec01c2829\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b" Jan 29 11:41:46 crc kubenswrapper[4766]: I0129 11:41:46.775478 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b" Jan 29 11:41:47 crc kubenswrapper[4766]: I0129 11:41:47.172463 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b"] Jan 29 11:41:47 crc kubenswrapper[4766]: I0129 11:41:47.445818 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mnrnx" event={"ID":"f683d2c1-09f9-488a-9361-44f876f7a61a","Type":"ContainerStarted","Data":"f3cadd709956e1970dbe9b8d4cd3cefba81e3efd6ade8860eb4f2a796584c4e9"} Jan 29 11:41:47 crc kubenswrapper[4766]: I0129 11:41:47.446634 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-mnrnx" Jan 29 11:41:47 crc kubenswrapper[4766]: I0129 11:41:47.447934 4766 generic.go:334] "Generic (PLEG): container finished" podID="498cad84-d6b7-4732-bdee-39dec01c2829" containerID="65b1d63fd03ffdb1ef65bfb47d9757f80ff4ab6f4577c559ffc074fe71f53df6" exitCode=0 Jan 29 11:41:47 crc kubenswrapper[4766]: I0129 11:41:47.447982 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b" event={"ID":"498cad84-d6b7-4732-bdee-39dec01c2829","Type":"ContainerDied","Data":"65b1d63fd03ffdb1ef65bfb47d9757f80ff4ab6f4577c559ffc074fe71f53df6"} Jan 29 11:41:47 crc kubenswrapper[4766]: I0129 11:41:47.448027 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b" event={"ID":"498cad84-d6b7-4732-bdee-39dec01c2829","Type":"ContainerStarted","Data":"0ca6cd843a40049084df388ec3b13e8be070864065a6a9bec06ce5192a18b67e"} Jan 29 11:41:47 crc kubenswrapper[4766]: I0129 11:41:47.474238 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-mnrnx" podStartSLOduration=5.304639906 podStartE2EDuration="14.474221205s" podCreationTimestamp="2026-01-29 11:41:33 +0000 UTC" firstStartedPulling="2026-01-29 11:41:33.68558966 +0000 UTC m=+1230.797982671" lastFinishedPulling="2026-01-29 11:41:42.855170959 +0000 UTC m=+1239.967563970" observedRunningTime="2026-01-29 11:41:47.470764699 +0000 UTC m=+1244.583157720" watchObservedRunningTime="2026-01-29 11:41:47.474221205 +0000 UTC m=+1244.586614216" Jan 29 11:41:48 crc kubenswrapper[4766]: I0129 11:41:48.541293 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-mnrnx" Jan 29 11:41:48 crc kubenswrapper[4766]: I0129 11:41:48.579029 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-mnrnx" Jan 29 11:41:51 crc kubenswrapper[4766]: I0129 11:41:51.474862 4766 generic.go:334] "Generic (PLEG): container finished" podID="498cad84-d6b7-4732-bdee-39dec01c2829" containerID="8ad9f8f8e9a1c78f800134342cf272ac8286048b8df46f0e048502bff2914ee0" exitCode=0 Jan 29 11:41:51 crc kubenswrapper[4766]: I0129 11:41:51.475063 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b" event={"ID":"498cad84-d6b7-4732-bdee-39dec01c2829","Type":"ContainerDied","Data":"8ad9f8f8e9a1c78f800134342cf272ac8286048b8df46f0e048502bff2914ee0"} Jan 29 11:41:52 crc kubenswrapper[4766]: I0129 11:41:52.491403 4766 generic.go:334] "Generic (PLEG): container finished" podID="498cad84-d6b7-4732-bdee-39dec01c2829" 
containerID="8128f87b8ca5a0d9d085254027b3a2b5156e4886f36ac69d6668689dc2d5c6ff" exitCode=0 Jan 29 11:41:52 crc kubenswrapper[4766]: I0129 11:41:52.491448 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b" event={"ID":"498cad84-d6b7-4732-bdee-39dec01c2829","Type":"ContainerDied","Data":"8128f87b8ca5a0d9d085254027b3a2b5156e4886f36ac69d6668689dc2d5c6ff"} Jan 29 11:41:53 crc kubenswrapper[4766]: I0129 11:41:53.677026 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-n8gpm" Jan 29 11:41:53 crc kubenswrapper[4766]: I0129 11:41:53.724066 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b" Jan 29 11:41:53 crc kubenswrapper[4766]: I0129 11:41:53.750169 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/498cad84-d6b7-4732-bdee-39dec01c2829-util\") pod \"498cad84-d6b7-4732-bdee-39dec01c2829\" (UID: \"498cad84-d6b7-4732-bdee-39dec01c2829\") " Jan 29 11:41:53 crc kubenswrapper[4766]: I0129 11:41:53.759960 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/498cad84-d6b7-4732-bdee-39dec01c2829-util" (OuterVolumeSpecName: "util") pod "498cad84-d6b7-4732-bdee-39dec01c2829" (UID: "498cad84-d6b7-4732-bdee-39dec01c2829"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:41:53 crc kubenswrapper[4766]: I0129 11:41:53.851156 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc8rt\" (UniqueName: \"kubernetes.io/projected/498cad84-d6b7-4732-bdee-39dec01c2829-kube-api-access-dc8rt\") pod \"498cad84-d6b7-4732-bdee-39dec01c2829\" (UID: \"498cad84-d6b7-4732-bdee-39dec01c2829\") " Jan 29 11:41:53 crc kubenswrapper[4766]: I0129 11:41:53.851216 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/498cad84-d6b7-4732-bdee-39dec01c2829-bundle\") pod \"498cad84-d6b7-4732-bdee-39dec01c2829\" (UID: \"498cad84-d6b7-4732-bdee-39dec01c2829\") " Jan 29 11:41:53 crc kubenswrapper[4766]: I0129 11:41:53.851526 4766 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/498cad84-d6b7-4732-bdee-39dec01c2829-util\") on node \"crc\" DevicePath \"\"" Jan 29 11:41:53 crc kubenswrapper[4766]: I0129 11:41:53.852572 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/498cad84-d6b7-4732-bdee-39dec01c2829-bundle" (OuterVolumeSpecName: "bundle") pod "498cad84-d6b7-4732-bdee-39dec01c2829" (UID: "498cad84-d6b7-4732-bdee-39dec01c2829"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:41:53 crc kubenswrapper[4766]: I0129 11:41:53.856842 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/498cad84-d6b7-4732-bdee-39dec01c2829-kube-api-access-dc8rt" (OuterVolumeSpecName: "kube-api-access-dc8rt") pod "498cad84-d6b7-4732-bdee-39dec01c2829" (UID: "498cad84-d6b7-4732-bdee-39dec01c2829"). InnerVolumeSpecName "kube-api-access-dc8rt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:41:53 crc kubenswrapper[4766]: I0129 11:41:53.952656 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc8rt\" (UniqueName: \"kubernetes.io/projected/498cad84-d6b7-4732-bdee-39dec01c2829-kube-api-access-dc8rt\") on node \"crc\" DevicePath \"\"" Jan 29 11:41:53 crc kubenswrapper[4766]: I0129 11:41:53.952685 4766 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/498cad84-d6b7-4732-bdee-39dec01c2829-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:41:54 crc kubenswrapper[4766]: I0129 11:41:54.167021 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lfzb5" Jan 29 11:41:54 crc kubenswrapper[4766]: I0129 11:41:54.505723 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b" event={"ID":"498cad84-d6b7-4732-bdee-39dec01c2829","Type":"ContainerDied","Data":"0ca6cd843a40049084df388ec3b13e8be070864065a6a9bec06ce5192a18b67e"} Jan 29 11:41:54 crc kubenswrapper[4766]: I0129 11:41:54.505771 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ca6cd843a40049084df388ec3b13e8be070864065a6a9bec06ce5192a18b67e" Jan 29 11:41:54 crc kubenswrapper[4766]: I0129 11:41:54.505855 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b" Jan 29 11:41:59 crc kubenswrapper[4766]: I0129 11:41:59.410607 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xfzbg"] Jan 29 11:41:59 crc kubenswrapper[4766]: E0129 11:41:59.411277 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498cad84-d6b7-4732-bdee-39dec01c2829" containerName="util" Jan 29 11:41:59 crc kubenswrapper[4766]: I0129 11:41:59.411290 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="498cad84-d6b7-4732-bdee-39dec01c2829" containerName="util" Jan 29 11:41:59 crc kubenswrapper[4766]: E0129 11:41:59.411299 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498cad84-d6b7-4732-bdee-39dec01c2829" containerName="extract" Jan 29 11:41:59 crc kubenswrapper[4766]: I0129 11:41:59.411304 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="498cad84-d6b7-4732-bdee-39dec01c2829" containerName="extract" Jan 29 11:41:59 crc kubenswrapper[4766]: E0129 11:41:59.411314 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498cad84-d6b7-4732-bdee-39dec01c2829" containerName="pull" Jan 29 11:41:59 crc kubenswrapper[4766]: I0129 11:41:59.411320 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="498cad84-d6b7-4732-bdee-39dec01c2829" containerName="pull" Jan 29 11:41:59 crc kubenswrapper[4766]: I0129 11:41:59.411451 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="498cad84-d6b7-4732-bdee-39dec01c2829" containerName="extract" Jan 29 11:41:59 crc kubenswrapper[4766]: I0129 11:41:59.411898 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xfzbg" Jan 29 11:41:59 crc kubenswrapper[4766]: I0129 11:41:59.413905 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 29 11:41:59 crc kubenswrapper[4766]: I0129 11:41:59.414118 4766 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-8xfnm" Jan 29 11:41:59 crc kubenswrapper[4766]: I0129 11:41:59.415946 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 29 11:41:59 crc kubenswrapper[4766]: I0129 11:41:59.430930 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xfzbg"] Jan 29 11:41:59 crc kubenswrapper[4766]: I0129 11:41:59.519053 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxwg4\" (UniqueName: \"kubernetes.io/projected/a9bd0656-4a38-40d4-b85a-300a09b858db-kube-api-access-hxwg4\") pod \"cert-manager-operator-controller-manager-66c8bdd694-xfzbg\" (UID: \"a9bd0656-4a38-40d4-b85a-300a09b858db\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xfzbg" Jan 29 11:41:59 crc kubenswrapper[4766]: I0129 11:41:59.519150 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a9bd0656-4a38-40d4-b85a-300a09b858db-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-xfzbg\" (UID: \"a9bd0656-4a38-40d4-b85a-300a09b858db\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xfzbg" Jan 29 11:41:59 crc kubenswrapper[4766]: I0129 11:41:59.620285 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxwg4\" (UniqueName: \"kubernetes.io/projected/a9bd0656-4a38-40d4-b85a-300a09b858db-kube-api-access-hxwg4\") pod \"cert-manager-operator-controller-manager-66c8bdd694-xfzbg\" (UID: \"a9bd0656-4a38-40d4-b85a-300a09b858db\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xfzbg" Jan 29 11:41:59 crc kubenswrapper[4766]: I0129 11:41:59.620343 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a9bd0656-4a38-40d4-b85a-300a09b858db-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-xfzbg\" (UID: \"a9bd0656-4a38-40d4-b85a-300a09b858db\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xfzbg" Jan 29 11:41:59 crc kubenswrapper[4766]: I0129 11:41:59.620826 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a9bd0656-4a38-40d4-b85a-300a09b858db-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-xfzbg\" (UID: \"a9bd0656-4a38-40d4-b85a-300a09b858db\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xfzbg" Jan 29 11:41:59 crc kubenswrapper[4766]: I0129 11:41:59.661176 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxwg4\" (UniqueName: \"kubernetes.io/projected/a9bd0656-4a38-40d4-b85a-300a09b858db-kube-api-access-hxwg4\") pod \"cert-manager-operator-controller-manager-66c8bdd694-xfzbg\" (UID: \"a9bd0656-4a38-40d4-b85a-300a09b858db\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xfzbg" Jan 29 11:41:59 crc kubenswrapper[4766]: I0129 11:41:59.727622 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xfzbg" Jan 29 11:42:00 crc kubenswrapper[4766]: I0129 11:42:00.482567 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xfzbg"] Jan 29 11:42:00 crc kubenswrapper[4766]: W0129 11:42:00.487815 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9bd0656_4a38_40d4_b85a_300a09b858db.slice/crio-cc3e08cccbea217ad7845c1c7d0544ee64ea67e0fe752cdb1a8965a3924aa475 WatchSource:0}: Error finding container cc3e08cccbea217ad7845c1c7d0544ee64ea67e0fe752cdb1a8965a3924aa475: Status 404 returned error can't find the container with id cc3e08cccbea217ad7845c1c7d0544ee64ea67e0fe752cdb1a8965a3924aa475 Jan 29 11:42:00 crc kubenswrapper[4766]: I0129 11:42:00.538827 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xfzbg" event={"ID":"a9bd0656-4a38-40d4-b85a-300a09b858db","Type":"ContainerStarted","Data":"cc3e08cccbea217ad7845c1c7d0544ee64ea67e0fe752cdb1a8965a3924aa475"} Jan 29 11:42:03 crc kubenswrapper[4766]: I0129 11:42:03.544084 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-mnrnx" Jan 29 11:42:04 crc kubenswrapper[4766]: I0129 11:42:04.569108 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xfzbg" event={"ID":"a9bd0656-4a38-40d4-b85a-300a09b858db","Type":"ContainerStarted","Data":"75d06af056ec9a8aa18131de5a3e8dce6d4793cae17e3b3ae3e387b329d248b5"} Jan 29 11:42:04 crc kubenswrapper[4766]: I0129 11:42:04.629988 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xfzbg" podStartSLOduration=2.484004288 podStartE2EDuration="5.62997161s" podCreationTimestamp="2026-01-29 11:41:59 +0000 UTC" firstStartedPulling="2026-01-29 11:42:00.490532485 +0000 UTC m=+1257.602925496" lastFinishedPulling="2026-01-29 11:42:03.636499807 +0000 UTC m=+1260.748892818" observedRunningTime="2026-01-29 11:42:04.609510322 +0000 UTC m=+1261.721903353" watchObservedRunningTime="2026-01-29 11:42:04.62997161 +0000 UTC m=+1261.742364621" Jan 29 11:42:08 crc kubenswrapper[4766]: I0129 11:42:08.260990 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-6fb75"] Jan 29 11:42:08 crc kubenswrapper[4766]: I0129 11:42:08.262884 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-6fb75" Jan 29 11:42:08 crc kubenswrapper[4766]: I0129 11:42:08.264425 4766 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-wr4bb" Jan 29 11:42:08 crc kubenswrapper[4766]: I0129 11:42:08.264790 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 29 11:42:08 crc kubenswrapper[4766]: I0129 11:42:08.264987 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 29 11:42:08 crc kubenswrapper[4766]: I0129 11:42:08.269783 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-6fb75"] Jan 29 11:42:08 crc kubenswrapper[4766]: I0129 11:42:08.427162 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1ec76af5-13ae-4c4e-8242-66f1df583a46-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-6fb75\" (UID: \"1ec76af5-13ae-4c4e-8242-66f1df583a46\") " pod="cert-manager/cert-manager-cainjector-5545bd876-6fb75" Jan 29 11:42:08 crc kubenswrapper[4766]: I0129 11:42:08.427447 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvmct\" (UniqueName: \"kubernetes.io/projected/1ec76af5-13ae-4c4e-8242-66f1df583a46-kube-api-access-pvmct\") pod \"cert-manager-cainjector-5545bd876-6fb75\" (UID: \"1ec76af5-13ae-4c4e-8242-66f1df583a46\") " pod="cert-manager/cert-manager-cainjector-5545bd876-6fb75" Jan 29 11:42:08 crc kubenswrapper[4766]: I0129 11:42:08.529011 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1ec76af5-13ae-4c4e-8242-66f1df583a46-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-6fb75\" (UID: \"1ec76af5-13ae-4c4e-8242-66f1df583a46\") " pod="cert-manager/cert-manager-cainjector-5545bd876-6fb75" Jan 29 11:42:08 crc kubenswrapper[4766]: I0129 11:42:08.529080 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvmct\" (UniqueName: \"kubernetes.io/projected/1ec76af5-13ae-4c4e-8242-66f1df583a46-kube-api-access-pvmct\") pod \"cert-manager-cainjector-5545bd876-6fb75\" (UID: \"1ec76af5-13ae-4c4e-8242-66f1df583a46\") " pod="cert-manager/cert-manager-cainjector-5545bd876-6fb75" Jan 29 11:42:08 crc kubenswrapper[4766]: I0129 11:42:08.551472 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvmct\" (UniqueName: \"kubernetes.io/projected/1ec76af5-13ae-4c4e-8242-66f1df583a46-kube-api-access-pvmct\") pod \"cert-manager-cainjector-5545bd876-6fb75\" (UID: \"1ec76af5-13ae-4c4e-8242-66f1df583a46\") " pod="cert-manager/cert-manager-cainjector-5545bd876-6fb75" Jan 29 11:42:08 crc kubenswrapper[4766]: I0129 11:42:08.551937 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1ec76af5-13ae-4c4e-8242-66f1df583a46-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-6fb75\" (UID: \"1ec76af5-13ae-4c4e-8242-66f1df583a46\") " pod="cert-manager/cert-manager-cainjector-5545bd876-6fb75" Jan 29 11:42:08 crc kubenswrapper[4766]: I0129 11:42:08.591376 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-6fb75" Jan 29 11:42:09 crc kubenswrapper[4766]: I0129 11:42:09.023640 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-6fb75"] Jan 29 11:42:09 crc kubenswrapper[4766]: I0129 11:42:09.602844 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-6fb75" event={"ID":"1ec76af5-13ae-4c4e-8242-66f1df583a46","Type":"ContainerStarted","Data":"c708e8cc09cb051c506d55cfc7c1a69568ffe7828f1c7e0b1a5a47f1138a0f67"} Jan 29 11:42:09 crc kubenswrapper[4766]: I0129 11:42:09.842371 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-hgrv9"] Jan 29 11:42:09 crc kubenswrapper[4766]: I0129 11:42:09.843704 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-hgrv9" Jan 29 11:42:09 crc kubenswrapper[4766]: I0129 11:42:09.865483 4766 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-jg9rh" Jan 29 11:42:09 crc kubenswrapper[4766]: I0129 11:42:09.868371 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-hgrv9"] Jan 29 11:42:09 crc kubenswrapper[4766]: I0129 11:42:09.945756 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhzs5\" (UniqueName: \"kubernetes.io/projected/7b86c740-3495-4d4e-9205-2940c91abcb2-kube-api-access-vhzs5\") pod \"cert-manager-webhook-6888856db4-hgrv9\" (UID: \"7b86c740-3495-4d4e-9205-2940c91abcb2\") " pod="cert-manager/cert-manager-webhook-6888856db4-hgrv9" Jan 29 11:42:09 crc kubenswrapper[4766]: I0129 11:42:09.946248 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b86c740-3495-4d4e-9205-2940c91abcb2-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-hgrv9\" (UID: \"7b86c740-3495-4d4e-9205-2940c91abcb2\") " pod="cert-manager/cert-manager-webhook-6888856db4-hgrv9" Jan 29 11:42:10 crc kubenswrapper[4766]: I0129 11:42:10.047228 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhzs5\" (UniqueName: \"kubernetes.io/projected/7b86c740-3495-4d4e-9205-2940c91abcb2-kube-api-access-vhzs5\") pod \"cert-manager-webhook-6888856db4-hgrv9\" (UID: \"7b86c740-3495-4d4e-9205-2940c91abcb2\") " pod="cert-manager/cert-manager-webhook-6888856db4-hgrv9" Jan 29 11:42:10 crc kubenswrapper[4766]: I0129 11:42:10.047295 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b86c740-3495-4d4e-9205-2940c91abcb2-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-hgrv9\" (UID: \"7b86c740-3495-4d4e-9205-2940c91abcb2\") " pod="cert-manager/cert-manager-webhook-6888856db4-hgrv9" Jan 29 11:42:10 crc kubenswrapper[4766]: I0129 11:42:10.066435 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b86c740-3495-4d4e-9205-2940c91abcb2-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-hgrv9\" (UID: \"7b86c740-3495-4d4e-9205-2940c91abcb2\") " pod="cert-manager/cert-manager-webhook-6888856db4-hgrv9" Jan 29 11:42:10 crc kubenswrapper[4766]: I0129 11:42:10.067090 4766 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-vhzs5\" (UniqueName: \"kubernetes.io/projected/7b86c740-3495-4d4e-9205-2940c91abcb2-kube-api-access-vhzs5\") pod \"cert-manager-webhook-6888856db4-hgrv9\" (UID: \"7b86c740-3495-4d4e-9205-2940c91abcb2\") " pod="cert-manager/cert-manager-webhook-6888856db4-hgrv9" Jan 29 11:42:10 crc kubenswrapper[4766]: I0129 11:42:10.170812 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-hgrv9" Jan 29 11:42:10 crc kubenswrapper[4766]: I0129 11:42:10.580107 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-hgrv9"] Jan 29 11:42:10 crc kubenswrapper[4766]: I0129 11:42:10.609684 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-hgrv9" event={"ID":"7b86c740-3495-4d4e-9205-2940c91abcb2","Type":"ContainerStarted","Data":"744b0e09afd0bf8227a19c799a6a5281b57faccaafe4ba277602804ee582b60d"} Jan 29 11:42:14 crc kubenswrapper[4766]: I0129 11:42:14.636464 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-hgrv9" event={"ID":"7b86c740-3495-4d4e-9205-2940c91abcb2","Type":"ContainerStarted","Data":"b7c3f43cba558ae8102a020ba166bca19c9b6d4163f4c327c2f7247ba95a0445"} Jan 29 11:42:14 crc kubenswrapper[4766]: I0129 11:42:14.637094 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-hgrv9" Jan 29 11:42:14 crc kubenswrapper[4766]: I0129 11:42:14.638315 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-6fb75" event={"ID":"1ec76af5-13ae-4c4e-8242-66f1df583a46","Type":"ContainerStarted","Data":"b20b7b5f94c2e9e7bd37bf31b0190a9f8c2437adddaf6d26af03a368f1386aa6"} Jan 29 11:42:14 crc kubenswrapper[4766]: I0129 11:42:14.667852 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-hgrv9" podStartSLOduration=2.663666521 podStartE2EDuration="5.667834528s" podCreationTimestamp="2026-01-29 11:42:09 +0000 UTC" firstStartedPulling="2026-01-29 11:42:10.586152036 +0000 UTC m=+1267.698545047" lastFinishedPulling="2026-01-29 11:42:13.590320053 +0000 UTC m=+1270.702713054" observedRunningTime="2026-01-29 11:42:14.66250844 +0000 UTC m=+1271.774901451" watchObservedRunningTime="2026-01-29 11:42:14.667834528 +0000 UTC m=+1271.780227539" Jan 29 11:42:14 crc kubenswrapper[4766]: I0129 11:42:14.689602 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-6fb75" podStartSLOduration=2.15430832 podStartE2EDuration="6.689579371s" podCreationTimestamp="2026-01-29 11:42:08 +0000 UTC" firstStartedPulling="2026-01-29 11:42:09.031951391 +0000 UTC m=+1266.144344402" lastFinishedPulling="2026-01-29 11:42:13.567222442 +0000 UTC m=+1270.679615453" observedRunningTime="2026-01-29 11:42:14.684452929 +0000 UTC m=+1271.796845960" watchObservedRunningTime="2026-01-29 11:42:14.689579371 +0000 UTC m=+1271.801972392" Jan 29 11:42:16 crc kubenswrapper[4766]: I0129 11:42:16.361588 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:42:16 crc kubenswrapper[4766]: I0129 11:42:16.362792 4766 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:42:16 crc kubenswrapper[4766]: I0129 11:42:16.362915 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" Jan 29 11:42:16 crc kubenswrapper[4766]: I0129 11:42:16.363433 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f2e08a09c8256dcfbd1ccad5d2946f6ff93f59cfb98c59a5e92b10bac66b9370"} pod="openshift-machine-config-operator/machine-config-daemon-npgg8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:42:16 crc kubenswrapper[4766]: I0129 11:42:16.363587 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" containerID="cri-o://f2e08a09c8256dcfbd1ccad5d2946f6ff93f59cfb98c59a5e92b10bac66b9370" gracePeriod=600 Jan 29 11:42:16 crc kubenswrapper[4766]: I0129 11:42:16.652469 4766 generic.go:334] "Generic (PLEG): container finished" podID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerID="f2e08a09c8256dcfbd1ccad5d2946f6ff93f59cfb98c59a5e92b10bac66b9370" exitCode=0 Jan 29 11:42:16 crc kubenswrapper[4766]: I0129 11:42:16.652544 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" event={"ID":"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a","Type":"ContainerDied","Data":"f2e08a09c8256dcfbd1ccad5d2946f6ff93f59cfb98c59a5e92b10bac66b9370"} Jan 29 11:42:16 crc kubenswrapper[4766]: I0129 11:42:16.652838 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" event={"ID":"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a","Type":"ContainerStarted","Data":"bb57735502b7ce72125607b2636513bf8e24464584c5c5d20047f0fe3c421130"} Jan 29 11:42:16 crc kubenswrapper[4766]: I0129 11:42:16.652858 4766 scope.go:117] "RemoveContainer" containerID="6e6ca83c79b07ee253c2ead25709cdf0f2689e63dd55c9ddea37747adac17fa8" Jan 29 11:42:20 crc kubenswrapper[4766]: I0129 11:42:20.174321 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-hgrv9" Jan 29 11:42:25 crc kubenswrapper[4766]: I0129 11:42:25.953326 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-fktjr"] Jan 29 11:42:25 crc kubenswrapper[4766]: I0129 11:42:25.955501 4766 util.go:30] "No sandbox for pod can be found. 
Jan 29 11:42:25 crc kubenswrapper[4766]: I0129 11:42:25.957615 4766 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-6cgwn" Jan 29 11:42:25 crc kubenswrapper[4766]: I0129 11:42:25.966474 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-fktjr"] Jan 29 11:42:26 crc kubenswrapper[4766]: I0129 11:42:26.085358 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n4fv\" (UniqueName: \"kubernetes.io/projected/02af44f9-cf78-4a95-ac39-e6012cb5446a-kube-api-access-4n4fv\") pod \"cert-manager-545d4d4674-fktjr\" (UID: \"02af44f9-cf78-4a95-ac39-e6012cb5446a\") " pod="cert-manager/cert-manager-545d4d4674-fktjr" Jan 29 11:42:26 crc kubenswrapper[4766]: I0129 11:42:26.085443 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/02af44f9-cf78-4a95-ac39-e6012cb5446a-bound-sa-token\") pod \"cert-manager-545d4d4674-fktjr\" (UID: \"02af44f9-cf78-4a95-ac39-e6012cb5446a\") " pod="cert-manager/cert-manager-545d4d4674-fktjr" Jan 29 11:42:26 crc kubenswrapper[4766]: I0129 11:42:26.187011 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n4fv\" (UniqueName: \"kubernetes.io/projected/02af44f9-cf78-4a95-ac39-e6012cb5446a-kube-api-access-4n4fv\") pod \"cert-manager-545d4d4674-fktjr\" (UID: \"02af44f9-cf78-4a95-ac39-e6012cb5446a\") " pod="cert-manager/cert-manager-545d4d4674-fktjr" Jan 29 11:42:26 crc kubenswrapper[4766]: I0129 11:42:26.187293 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/02af44f9-cf78-4a95-ac39-e6012cb5446a-bound-sa-token\") pod \"cert-manager-545d4d4674-fktjr\" (UID: \"02af44f9-cf78-4a95-ac39-e6012cb5446a\") " pod="cert-manager/cert-manager-545d4d4674-fktjr" Jan 29 11:42:26 crc kubenswrapper[4766]: I0129 11:42:26.206808 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n4fv\" (UniqueName: \"kubernetes.io/projected/02af44f9-cf78-4a95-ac39-e6012cb5446a-kube-api-access-4n4fv\") pod \"cert-manager-545d4d4674-fktjr\" (UID: \"02af44f9-cf78-4a95-ac39-e6012cb5446a\") " pod="cert-manager/cert-manager-545d4d4674-fktjr" Jan 29 11:42:26 crc kubenswrapper[4766]: I0129 11:42:26.207136 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/02af44f9-cf78-4a95-ac39-e6012cb5446a-bound-sa-token\") pod \"cert-manager-545d4d4674-fktjr\" (UID: \"02af44f9-cf78-4a95-ac39-e6012cb5446a\") " pod="cert-manager/cert-manager-545d4d4674-fktjr" Jan 29 11:42:26 crc kubenswrapper[4766]: I0129 11:42:26.272802 4766 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-fktjr" Jan 29 11:42:26 crc kubenswrapper[4766]: I0129 11:42:26.682494 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-fktjr"] Jan 29 11:42:26 crc kubenswrapper[4766]: W0129 11:42:26.688622 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02af44f9_cf78_4a95_ac39_e6012cb5446a.slice/crio-a5ef07bd3c54e528799d5402b0bf9e7637669e6d1e36808dfa7f608c1c7d9368 WatchSource:0}: Error finding container a5ef07bd3c54e528799d5402b0bf9e7637669e6d1e36808dfa7f608c1c7d9368: Status 404 returned error can't find the container with id a5ef07bd3c54e528799d5402b0bf9e7637669e6d1e36808dfa7f608c1c7d9368 Jan 29 11:42:26 crc kubenswrapper[4766]: I0129 11:42:26.710894 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-fktjr" event={"ID":"02af44f9-cf78-4a95-ac39-e6012cb5446a","Type":"ContainerStarted","Data":"a5ef07bd3c54e528799d5402b0bf9e7637669e6d1e36808dfa7f608c1c7d9368"} Jan 29 11:42:27 crc kubenswrapper[4766]: I0129 11:42:27.719306 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-fktjr" event={"ID":"02af44f9-cf78-4a95-ac39-e6012cb5446a","Type":"ContainerStarted","Data":"04e9ad4787903fea342c2f409246b2be05d82d8e00b07da5bd1c02c208011374"} Jan 29 11:42:27 crc kubenswrapper[4766]: I0129 11:42:27.734380 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-fktjr" podStartSLOduration=2.734363142 podStartE2EDuration="2.734363142s" podCreationTimestamp="2026-01-29 11:42:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:42:27.732665535 +0000 UTC m=+1284.845058556" watchObservedRunningTime="2026-01-29 11:42:27.734363142 +0000 UTC m=+1284.846756173" Jan 29 11:42:33 crc kubenswrapper[4766]: I0129 11:42:33.328148 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-sx5ww"] Jan 29 11:42:33 crc kubenswrapper[4766]: I0129 11:42:33.329588 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sx5ww"
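
The pod_startup_latency_tracker figures are internally consistent: podStartSLOduration is podStartE2EDuration minus the image-pull window (lastFinishedPulling − firstStartedPulling), which is why the fktjr entry above, with zeroed pull timestamps, reports the same value for both. A small cross-check against the cert-manager-webhook figures logged at 11:42:14, using the monotonic m= offsets; the constants are copied from the log:

```go
package main

import "fmt"

// Cross-check of pod_startup_latency_tracker's figures for
// cert-manager-webhook-6888856db4-hgrv9 (entry logged at 11:42:14).
func main() {
	const (
		e2e              = 5.667834528    // podStartE2EDuration, seconds
		firstStartedPull = 1267.698545047 // firstStartedPulling, m= offset
		lastFinishedPull = 1270.702713054 // lastFinishedPulling, m= offset
	)
	pull := lastFinishedPull - firstStartedPull // 3.004168007s spent pulling images
	fmt.Printf("podStartSLOduration ~= %.9f\n", e2e-pull) // ~2.663666521, as logged
}
```

The cainjector entry works out the same way: 6.689579371 − (1270.679615453 − 1266.144344402) = 2.154308320, matching its logged podStartSLOduration=2.15430832.
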
Jan 29 11:42:33 crc kubenswrapper[4766]: I0129 11:42:33.332407 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-7966t" Jan 29 11:42:33 crc kubenswrapper[4766]: I0129 11:42:33.332420 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 29 11:42:33 crc kubenswrapper[4766]: I0129 11:42:33.333626 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 29 11:42:33 crc kubenswrapper[4766]: I0129 11:42:33.339323 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sx5ww"] Jan 29 11:42:33 crc kubenswrapper[4766]: I0129 11:42:33.486668 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h4lk\" (UniqueName: \"kubernetes.io/projected/2c045f01-bfe8-4333-bce0-05d2dcf487b9-kube-api-access-6h4lk\") pod \"openstack-operator-index-sx5ww\" (UID: \"2c045f01-bfe8-4333-bce0-05d2dcf487b9\") " pod="openstack-operators/openstack-operator-index-sx5ww" Jan 29 11:42:33 crc kubenswrapper[4766]: I0129 11:42:33.588050 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h4lk\" (UniqueName: \"kubernetes.io/projected/2c045f01-bfe8-4333-bce0-05d2dcf487b9-kube-api-access-6h4lk\") pod \"openstack-operator-index-sx5ww\" (UID: \"2c045f01-bfe8-4333-bce0-05d2dcf487b9\") " pod="openstack-operators/openstack-operator-index-sx5ww" Jan 29 11:42:33 crc kubenswrapper[4766]: I0129 11:42:33.614484 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h4lk\" (UniqueName: \"kubernetes.io/projected/2c045f01-bfe8-4333-bce0-05d2dcf487b9-kube-api-access-6h4lk\") pod \"openstack-operator-index-sx5ww\" (UID: \"2c045f01-bfe8-4333-bce0-05d2dcf487b9\") " pod="openstack-operators/openstack-operator-index-sx5ww" Jan 29 11:42:33 crc kubenswrapper[4766]: I0129 11:42:33.655832 4766 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/openstack-operator-index-sx5ww" Jan 29 11:42:34 crc kubenswrapper[4766]: I0129 11:42:34.073902 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sx5ww"] Jan 29 11:42:34 crc kubenswrapper[4766]: W0129 11:42:34.080643 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c045f01_bfe8_4333_bce0_05d2dcf487b9.slice/crio-aced2bb9e07dd9c21d045103c4f8e8ff1fac588c712c99d83e807b3de185d1e9 WatchSource:0}: Error finding container aced2bb9e07dd9c21d045103c4f8e8ff1fac588c712c99d83e807b3de185d1e9: Status 404 returned error can't find the container with id aced2bb9e07dd9c21d045103c4f8e8ff1fac588c712c99d83e807b3de185d1e9 Jan 29 11:42:34 crc kubenswrapper[4766]: I0129 11:42:34.767343 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sx5ww" event={"ID":"2c045f01-bfe8-4333-bce0-05d2dcf487b9","Type":"ContainerStarted","Data":"aced2bb9e07dd9c21d045103c4f8e8ff1fac588c712c99d83e807b3de185d1e9"} Jan 29 11:42:36 crc kubenswrapper[4766]: I0129 11:42:36.690958 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-sx5ww"] Jan 29 11:42:37 crc kubenswrapper[4766]: I0129 11:42:37.302727 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-zg86v"] Jan 29 11:42:37 crc kubenswrapper[4766]: I0129 11:42:37.304526 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zg86v" Jan 29 11:42:37 crc kubenswrapper[4766]: I0129 11:42:37.309856 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zg86v"] Jan 29 11:42:37 crc kubenswrapper[4766]: I0129 11:42:37.439871 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzmzj\" (UniqueName: \"kubernetes.io/projected/8fe94dcf-0b49-4bba-b077-aff75fd5ae19-kube-api-access-mzmzj\") pod \"openstack-operator-index-zg86v\" (UID: \"8fe94dcf-0b49-4bba-b077-aff75fd5ae19\") " pod="openstack-operators/openstack-operator-index-zg86v" Jan 29 11:42:37 crc kubenswrapper[4766]: I0129 11:42:37.541348 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzmzj\" (UniqueName: \"kubernetes.io/projected/8fe94dcf-0b49-4bba-b077-aff75fd5ae19-kube-api-access-mzmzj\") pod \"openstack-operator-index-zg86v\" (UID: \"8fe94dcf-0b49-4bba-b077-aff75fd5ae19\") " pod="openstack-operators/openstack-operator-index-zg86v" Jan 29 11:42:37 crc kubenswrapper[4766]: I0129 11:42:37.564028 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzmzj\" (UniqueName: \"kubernetes.io/projected/8fe94dcf-0b49-4bba-b077-aff75fd5ae19-kube-api-access-mzmzj\") pod \"openstack-operator-index-zg86v\" (UID: \"8fe94dcf-0b49-4bba-b077-aff75fd5ae19\") " pod="openstack-operators/openstack-operator-index-zg86v" Jan 29 11:42:37 crc kubenswrapper[4766]: I0129 11:42:37.629944 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zg86v"
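
Every pod in this section mounts a single kube-api-access-* projected volume before its sandbox is created (the cert-manager pods mount an additional bound-sa-token). A sketch of the shape such a volume typically has, a bound service-account token projected alongside the cluster CA bundle; the volume name is taken from the zg86v entries above, while the token expiry and the exact projected sources are the conventional kubelet-injected defaults, assumed here rather than read from this cluster:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	expiry := int64(3607) // conventional kubelet-injected value; an assumption here
	vol := corev1.Volume{
		Name: "kube-api-access-mzmzj", // name from the log
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					// Short-lived, audience-bound SA token, rotated by the kubelet.
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiry,
					}},
					// Cluster CA so the workload can verify the API server.
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
				},
			},
		},
	}
	fmt.Println("projected volume:", vol.Name)
}
```

The reflector.go "Caches populated" entries for kube-root-ca.crt and the dockercfg secrets are the kubelet warming exactly these referenced objects before MountVolume.SetUp runs.
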
Jan 29 11:42:37 crc kubenswrapper[4766]: I0129 11:42:37.801311 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sx5ww" event={"ID":"2c045f01-bfe8-4333-bce0-05d2dcf487b9","Type":"ContainerStarted","Data":"5dcdd593dee4474e368624ad7b70be821a4829eaee65a8002729d09057f0b2c4"} Jan 29 11:42:37 crc kubenswrapper[4766]: I0129 11:42:37.801470 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-sx5ww" podUID="2c045f01-bfe8-4333-bce0-05d2dcf487b9" containerName="registry-server" containerID="cri-o://5dcdd593dee4474e368624ad7b70be821a4829eaee65a8002729d09057f0b2c4" gracePeriod=2 Jan 29 11:42:37 crc kubenswrapper[4766]: I0129 11:42:37.822245 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-sx5ww" podStartSLOduration=1.880133412 podStartE2EDuration="4.822229086s" podCreationTimestamp="2026-01-29 11:42:33 +0000 UTC" firstStartedPulling="2026-01-29 11:42:34.083504183 +0000 UTC m=+1291.195897194" lastFinishedPulling="2026-01-29 11:42:37.025599857 +0000 UTC m=+1294.137992868" observedRunningTime="2026-01-29 11:42:37.816123887 +0000 UTC m=+1294.928516908" watchObservedRunningTime="2026-01-29 11:42:37.822229086 +0000 UTC m=+1294.934622097" Jan 29 11:42:38 crc kubenswrapper[4766]: I0129 11:42:38.029660 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zg86v"] Jan 29 11:42:38 crc kubenswrapper[4766]: W0129 11:42:38.042803 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fe94dcf_0b49_4bba_b077_aff75fd5ae19.slice/crio-707781af2d714aafb26f1d545b8b07a77dbd006c07c44b289eba8cbc72a58c27 WatchSource:0}: Error finding container 707781af2d714aafb26f1d545b8b07a77dbd006c07c44b289eba8cbc72a58c27: Status 404 returned error can't find the container with id 707781af2d714aafb26f1d545b8b07a77dbd006c07c44b289eba8cbc72a58c27 Jan 29 11:42:38 crc kubenswrapper[4766]: I0129 11:42:38.236646 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sx5ww" Jan 29 11:42:38 crc kubenswrapper[4766]: I0129 11:42:38.350546 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6h4lk\" (UniqueName: \"kubernetes.io/projected/2c045f01-bfe8-4333-bce0-05d2dcf487b9-kube-api-access-6h4lk\") pod \"2c045f01-bfe8-4333-bce0-05d2dcf487b9\" (UID: \"2c045f01-bfe8-4333-bce0-05d2dcf487b9\") " Jan 29 11:42:38 crc kubenswrapper[4766]: I0129 11:42:38.355114 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c045f01-bfe8-4333-bce0-05d2dcf487b9-kube-api-access-6h4lk" (OuterVolumeSpecName: "kube-api-access-6h4lk") pod "2c045f01-bfe8-4333-bce0-05d2dcf487b9" (UID: "2c045f01-bfe8-4333-bce0-05d2dcf487b9"). InnerVolumeSpecName "kube-api-access-6h4lk".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:42:38 crc kubenswrapper[4766]: I0129 11:42:38.451834 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6h4lk\" (UniqueName: \"kubernetes.io/projected/2c045f01-bfe8-4333-bce0-05d2dcf487b9-kube-api-access-6h4lk\") on node \"crc\" DevicePath \"\"" Jan 29 11:42:38 crc kubenswrapper[4766]: I0129 11:42:38.809646 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zg86v" event={"ID":"8fe94dcf-0b49-4bba-b077-aff75fd5ae19","Type":"ContainerStarted","Data":"fff3cb311ed1981f1f609fbf622c52176050d36f8b7ea2d68615b37d1cbc7778"} Jan 29 11:42:38 crc kubenswrapper[4766]: I0129 11:42:38.809688 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zg86v" event={"ID":"8fe94dcf-0b49-4bba-b077-aff75fd5ae19","Type":"ContainerStarted","Data":"707781af2d714aafb26f1d545b8b07a77dbd006c07c44b289eba8cbc72a58c27"} Jan 29 11:42:38 crc kubenswrapper[4766]: I0129 11:42:38.812139 4766 generic.go:334] "Generic (PLEG): container finished" podID="2c045f01-bfe8-4333-bce0-05d2dcf487b9" containerID="5dcdd593dee4474e368624ad7b70be821a4829eaee65a8002729d09057f0b2c4" exitCode=0 Jan 29 11:42:38 crc kubenswrapper[4766]: I0129 11:42:38.812173 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sx5ww" event={"ID":"2c045f01-bfe8-4333-bce0-05d2dcf487b9","Type":"ContainerDied","Data":"5dcdd593dee4474e368624ad7b70be821a4829eaee65a8002729d09057f0b2c4"} Jan 29 11:42:38 crc kubenswrapper[4766]: I0129 11:42:38.812222 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sx5ww" Jan 29 11:42:38 crc kubenswrapper[4766]: I0129 11:42:38.812251 4766 scope.go:117] "RemoveContainer" containerID="5dcdd593dee4474e368624ad7b70be821a4829eaee65a8002729d09057f0b2c4" Jan 29 11:42:38 crc kubenswrapper[4766]: I0129 11:42:38.812236 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sx5ww" event={"ID":"2c045f01-bfe8-4333-bce0-05d2dcf487b9","Type":"ContainerDied","Data":"aced2bb9e07dd9c21d045103c4f8e8ff1fac588c712c99d83e807b3de185d1e9"} Jan 29 11:42:38 crc kubenswrapper[4766]: I0129 11:42:38.834836 4766 scope.go:117] "RemoveContainer" containerID="5dcdd593dee4474e368624ad7b70be821a4829eaee65a8002729d09057f0b2c4" Jan 29 11:42:38 crc kubenswrapper[4766]: E0129 11:42:38.835237 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dcdd593dee4474e368624ad7b70be821a4829eaee65a8002729d09057f0b2c4\": container with ID starting with 5dcdd593dee4474e368624ad7b70be821a4829eaee65a8002729d09057f0b2c4 not found: ID does not exist" containerID="5dcdd593dee4474e368624ad7b70be821a4829eaee65a8002729d09057f0b2c4" Jan 29 11:42:38 crc kubenswrapper[4766]: I0129 11:42:38.835287 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dcdd593dee4474e368624ad7b70be821a4829eaee65a8002729d09057f0b2c4"} err="failed to get container status \"5dcdd593dee4474e368624ad7b70be821a4829eaee65a8002729d09057f0b2c4\": rpc error: code = NotFound desc = could not find container \"5dcdd593dee4474e368624ad7b70be821a4829eaee65a8002729d09057f0b2c4\": container with ID starting with 5dcdd593dee4474e368624ad7b70be821a4829eaee65a8002729d09057f0b2c4 not found: ID does not exist" Jan 29 11:42:38 crc kubenswrapper[4766]: 
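
The two RemoveContainer calls above race each other: the first removal wins, so the second one's ContainerStatus query comes back from CRI-O as a gRPC NotFound, which the kubelet logs and ignores before carrying on to SyncLoop REMOVE. A minimal sketch of that status-code check as any CRI client would write it; the error text is abbreviated and the wiring is assumed, not the kubelet's exact code path:

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func main() {
	// Stand-in for the runtime's reply once the container is already gone.
	err := status.Error(codes.NotFound, "could not find container")
	// Benign during teardown: NotFound just means the earlier delete won the race.
	if status.Code(err) == codes.NotFound {
		fmt.Println("container already removed; nothing left to delete")
	}
}
```
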
Jan 29 11:42:38 crc kubenswrapper[4766]: I0129 11:42:38.845236 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-zg86v" podStartSLOduration=1.808062886 podStartE2EDuration="1.845218447s" podCreationTimestamp="2026-01-29 11:42:37 +0000 UTC" firstStartedPulling="2026-01-29 11:42:38.04574072 +0000 UTC m=+1295.158133721" lastFinishedPulling="2026-01-29 11:42:38.082896271 +0000 UTC m=+1295.195289282" observedRunningTime="2026-01-29 11:42:38.831012463 +0000 UTC m=+1295.943405484" watchObservedRunningTime="2026-01-29 11:42:38.845218447 +0000 UTC m=+1295.957611458" Jan 29 11:42:38 crc kubenswrapper[4766]: I0129 11:42:38.850115 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-sx5ww"] Jan 29 11:42:38 crc kubenswrapper[4766]: I0129 11:42:38.853754 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-sx5ww"] Jan 29 11:42:39 crc kubenswrapper[4766]: I0129 11:42:39.234711 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c045f01-bfe8-4333-bce0-05d2dcf487b9" path="/var/lib/kubelet/pods/2c045f01-bfe8-4333-bce0-05d2dcf487b9/volumes" Jan 29 11:42:47 crc kubenswrapper[4766]: I0129 11:42:47.630259 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-zg86v" Jan 29 11:42:47 crc kubenswrapper[4766]: I0129 11:42:47.631821 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-zg86v" Jan 29 11:42:47 crc kubenswrapper[4766]: I0129 11:42:47.656550 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-zg86v" Jan 29 11:42:47 crc kubenswrapper[4766]: I0129 11:42:47.892390 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-zg86v" Jan 29 11:42:56 crc kubenswrapper[4766]: I0129 11:42:56.071071 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56"] Jan 29 11:42:56 crc kubenswrapper[4766]: E0129 11:42:56.071930 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c045f01-bfe8-4333-bce0-05d2dcf487b9" containerName="registry-server" Jan 29 11:42:56 crc kubenswrapper[4766]: I0129 11:42:56.071946 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c045f01-bfe8-4333-bce0-05d2dcf487b9" containerName="registry-server" Jan 29 11:42:56 crc kubenswrapper[4766]: I0129 11:42:56.072072 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c045f01-bfe8-4333-bce0-05d2dcf487b9" containerName="registry-server" Jan 29 11:42:56 crc kubenswrapper[4766]: I0129 11:42:56.073208 4766 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56" Jan 29 11:42:56 crc kubenswrapper[4766]: I0129 11:42:56.078192 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hlkhq" Jan 29 11:42:56 crc kubenswrapper[4766]: I0129 11:42:56.093204 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56"] Jan 29 11:42:56 crc kubenswrapper[4766]: I0129 11:42:56.116117 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsccr\" (UniqueName: \"kubernetes.io/projected/cb2e1ea2-2471-4e0d-93ac-36ac457e1d59-kube-api-access-nsccr\") pod \"a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56\" (UID: \"cb2e1ea2-2471-4e0d-93ac-36ac457e1d59\") " pod="openstack-operators/a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56" Jan 29 11:42:56 crc kubenswrapper[4766]: I0129 11:42:56.116202 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cb2e1ea2-2471-4e0d-93ac-36ac457e1d59-bundle\") pod \"a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56\" (UID: \"cb2e1ea2-2471-4e0d-93ac-36ac457e1d59\") " pod="openstack-operators/a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56" Jan 29 11:42:56 crc kubenswrapper[4766]: I0129 11:42:56.116244 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cb2e1ea2-2471-4e0d-93ac-36ac457e1d59-util\") pod \"a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56\" (UID: \"cb2e1ea2-2471-4e0d-93ac-36ac457e1d59\") " pod="openstack-operators/a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56" Jan 29 11:42:56 crc kubenswrapper[4766]: I0129 11:42:56.217632 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cb2e1ea2-2471-4e0d-93ac-36ac457e1d59-bundle\") pod \"a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56\" (UID: \"cb2e1ea2-2471-4e0d-93ac-36ac457e1d59\") " pod="openstack-operators/a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56" Jan 29 11:42:56 crc kubenswrapper[4766]: I0129 11:42:56.217688 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cb2e1ea2-2471-4e0d-93ac-36ac457e1d59-util\") pod \"a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56\" (UID: \"cb2e1ea2-2471-4e0d-93ac-36ac457e1d59\") " pod="openstack-operators/a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56" Jan 29 11:42:56 crc kubenswrapper[4766]: I0129 11:42:56.217759 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsccr\" (UniqueName: \"kubernetes.io/projected/cb2e1ea2-2471-4e0d-93ac-36ac457e1d59-kube-api-access-nsccr\") pod \"a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56\" (UID: \"cb2e1ea2-2471-4e0d-93ac-36ac457e1d59\") " pod="openstack-operators/a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56" Jan 29 11:42:56 crc kubenswrapper[4766]: I0129 11:42:56.218225 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/cb2e1ea2-2471-4e0d-93ac-36ac457e1d59-bundle\") pod \"a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56\" (UID: \"cb2e1ea2-2471-4e0d-93ac-36ac457e1d59\") " pod="openstack-operators/a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56" Jan 29 11:42:56 crc kubenswrapper[4766]: I0129 11:42:56.218350 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cb2e1ea2-2471-4e0d-93ac-36ac457e1d59-util\") pod \"a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56\" (UID: \"cb2e1ea2-2471-4e0d-93ac-36ac457e1d59\") " pod="openstack-operators/a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56" Jan 29 11:42:56 crc kubenswrapper[4766]: I0129 11:42:56.237127 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsccr\" (UniqueName: \"kubernetes.io/projected/cb2e1ea2-2471-4e0d-93ac-36ac457e1d59-kube-api-access-nsccr\") pod \"a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56\" (UID: \"cb2e1ea2-2471-4e0d-93ac-36ac457e1d59\") " pod="openstack-operators/a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56" Jan 29 11:42:56 crc kubenswrapper[4766]: I0129 11:42:56.392189 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56" Jan 29 11:42:56 crc kubenswrapper[4766]: I0129 11:42:56.602358 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56"] Jan 29 11:42:56 crc kubenswrapper[4766]: I0129 11:42:56.926540 4766 generic.go:334] "Generic (PLEG): container finished" podID="cb2e1ea2-2471-4e0d-93ac-36ac457e1d59" containerID="c52f65dbbe219d30aeac90f9ea2391885d1370bf76b83b109597a55e1132016a" exitCode=0 Jan 29 11:42:56 crc kubenswrapper[4766]: I0129 11:42:56.926597 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56" event={"ID":"cb2e1ea2-2471-4e0d-93ac-36ac457e1d59","Type":"ContainerDied","Data":"c52f65dbbe219d30aeac90f9ea2391885d1370bf76b83b109597a55e1132016a"} Jan 29 11:42:56 crc kubenswrapper[4766]: I0129 11:42:56.926654 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56" event={"ID":"cb2e1ea2-2471-4e0d-93ac-36ac457e1d59","Type":"ContainerStarted","Data":"99b899852b9e08c07d548cd616553253ba2272f1928fe6db3a974acf9f13aa35"} Jan 29 11:42:57 crc kubenswrapper[4766]: I0129 11:42:57.940182 4766 generic.go:334] "Generic (PLEG): container finished" podID="cb2e1ea2-2471-4e0d-93ac-36ac457e1d59" containerID="b541f1c886ad2ba128f00f5a77e25b234878e38643d69b688bdba210725c714d" exitCode=0 Jan 29 11:42:57 crc kubenswrapper[4766]: I0129 11:42:57.940265 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56" event={"ID":"cb2e1ea2-2471-4e0d-93ac-36ac457e1d59","Type":"ContainerDied","Data":"b541f1c886ad2ba128f00f5a77e25b234878e38643d69b688bdba210725c714d"} Jan 29 11:42:58 crc kubenswrapper[4766]: I0129 11:42:58.947272 4766 generic.go:334] "Generic (PLEG): container finished" podID="cb2e1ea2-2471-4e0d-93ac-36ac457e1d59" containerID="721f000039a3009fe08df1893df647e7fb8a7b21efff71efddcb0f9d2f190caa" exitCode=0 Jan 29 11:42:58 crc kubenswrapper[4766]: I0129 11:42:58.947442 4766 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56" event={"ID":"cb2e1ea2-2471-4e0d-93ac-36ac457e1d59","Type":"ContainerDied","Data":"721f000039a3009fe08df1893df647e7fb8a7b21efff71efddcb0f9d2f190caa"} Jan 29 11:43:00 crc kubenswrapper[4766]: I0129 11:43:00.170956 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56" Jan 29 11:43:00 crc kubenswrapper[4766]: I0129 11:43:00.288277 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cb2e1ea2-2471-4e0d-93ac-36ac457e1d59-bundle\") pod \"cb2e1ea2-2471-4e0d-93ac-36ac457e1d59\" (UID: \"cb2e1ea2-2471-4e0d-93ac-36ac457e1d59\") " Jan 29 11:43:00 crc kubenswrapper[4766]: I0129 11:43:00.288367 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsccr\" (UniqueName: \"kubernetes.io/projected/cb2e1ea2-2471-4e0d-93ac-36ac457e1d59-kube-api-access-nsccr\") pod \"cb2e1ea2-2471-4e0d-93ac-36ac457e1d59\" (UID: \"cb2e1ea2-2471-4e0d-93ac-36ac457e1d59\") " Jan 29 11:43:00 crc kubenswrapper[4766]: I0129 11:43:00.288402 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cb2e1ea2-2471-4e0d-93ac-36ac457e1d59-util\") pod \"cb2e1ea2-2471-4e0d-93ac-36ac457e1d59\" (UID: \"cb2e1ea2-2471-4e0d-93ac-36ac457e1d59\") " Jan 29 11:43:00 crc kubenswrapper[4766]: I0129 11:43:00.290289 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb2e1ea2-2471-4e0d-93ac-36ac457e1d59-bundle" (OuterVolumeSpecName: "bundle") pod "cb2e1ea2-2471-4e0d-93ac-36ac457e1d59" (UID: "cb2e1ea2-2471-4e0d-93ac-36ac457e1d59"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:43:00 crc kubenswrapper[4766]: I0129 11:43:00.294705 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb2e1ea2-2471-4e0d-93ac-36ac457e1d59-kube-api-access-nsccr" (OuterVolumeSpecName: "kube-api-access-nsccr") pod "cb2e1ea2-2471-4e0d-93ac-36ac457e1d59" (UID: "cb2e1ea2-2471-4e0d-93ac-36ac457e1d59"). InnerVolumeSpecName "kube-api-access-nsccr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:43:00 crc kubenswrapper[4766]: I0129 11:43:00.312086 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb2e1ea2-2471-4e0d-93ac-36ac457e1d59-util" (OuterVolumeSpecName: "util") pod "cb2e1ea2-2471-4e0d-93ac-36ac457e1d59" (UID: "cb2e1ea2-2471-4e0d-93ac-36ac457e1d59"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:43:00 crc kubenswrapper[4766]: I0129 11:43:00.390970 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsccr\" (UniqueName: \"kubernetes.io/projected/cb2e1ea2-2471-4e0d-93ac-36ac457e1d59-kube-api-access-nsccr\") on node \"crc\" DevicePath \"\"" Jan 29 11:43:00 crc kubenswrapper[4766]: I0129 11:43:00.391108 4766 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cb2e1ea2-2471-4e0d-93ac-36ac457e1d59-util\") on node \"crc\" DevicePath \"\"" Jan 29 11:43:00 crc kubenswrapper[4766]: I0129 11:43:00.391134 4766 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cb2e1ea2-2471-4e0d-93ac-36ac457e1d59-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:43:00 crc kubenswrapper[4766]: I0129 11:43:00.963334 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56" event={"ID":"cb2e1ea2-2471-4e0d-93ac-36ac457e1d59","Type":"ContainerDied","Data":"99b899852b9e08c07d548cd616553253ba2272f1928fe6db3a974acf9f13aa35"} Jan 29 11:43:00 crc kubenswrapper[4766]: I0129 11:43:00.963378 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99b899852b9e08c07d548cd616553253ba2272f1928fe6db3a974acf9f13aa35" Jan 29 11:43:00 crc kubenswrapper[4766]: I0129 11:43:00.963385 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56" Jan 29 11:43:03 crc kubenswrapper[4766]: I0129 11:43:03.368663 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-f45dc54dc-g5pbs"] Jan 29 11:43:03 crc kubenswrapper[4766]: E0129 11:43:03.369521 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb2e1ea2-2471-4e0d-93ac-36ac457e1d59" containerName="util" Jan 29 11:43:03 crc kubenswrapper[4766]: I0129 11:43:03.369535 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb2e1ea2-2471-4e0d-93ac-36ac457e1d59" containerName="util" Jan 29 11:43:03 crc kubenswrapper[4766]: E0129 11:43:03.369545 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb2e1ea2-2471-4e0d-93ac-36ac457e1d59" containerName="pull" Jan 29 11:43:03 crc kubenswrapper[4766]: I0129 11:43:03.369556 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb2e1ea2-2471-4e0d-93ac-36ac457e1d59" containerName="pull" Jan 29 11:43:03 crc kubenswrapper[4766]: E0129 11:43:03.369576 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb2e1ea2-2471-4e0d-93ac-36ac457e1d59" containerName="extract" Jan 29 11:43:03 crc kubenswrapper[4766]: I0129 11:43:03.369584 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb2e1ea2-2471-4e0d-93ac-36ac457e1d59" containerName="extract" Jan 29 11:43:03 crc kubenswrapper[4766]: I0129 11:43:03.369766 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb2e1ea2-2471-4e0d-93ac-36ac457e1d59" containerName="extract" Jan 29 11:43:03 crc kubenswrapper[4766]: I0129 11:43:03.370310 4766 util.go:30] "No sandbox for pod can be found. 
Jan 29 11:43:03 crc kubenswrapper[4766]: I0129 11:43:03.378975 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-mft2t" Jan 29 11:43:03 crc kubenswrapper[4766]: I0129 11:43:03.391431 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-f45dc54dc-g5pbs"] Jan 29 11:43:03 crc kubenswrapper[4766]: I0129 11:43:03.432496 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk4n4\" (UniqueName: \"kubernetes.io/projected/e5e2bd8b-a38a-406c-b237-ff9e369d107c-kube-api-access-nk4n4\") pod \"openstack-operator-controller-init-f45dc54dc-g5pbs\" (UID: \"e5e2bd8b-a38a-406c-b237-ff9e369d107c\") " pod="openstack-operators/openstack-operator-controller-init-f45dc54dc-g5pbs" Jan 29 11:43:03 crc kubenswrapper[4766]: I0129 11:43:03.534237 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk4n4\" (UniqueName: \"kubernetes.io/projected/e5e2bd8b-a38a-406c-b237-ff9e369d107c-kube-api-access-nk4n4\") pod \"openstack-operator-controller-init-f45dc54dc-g5pbs\" (UID: \"e5e2bd8b-a38a-406c-b237-ff9e369d107c\") " pod="openstack-operators/openstack-operator-controller-init-f45dc54dc-g5pbs" Jan 29 11:43:03 crc kubenswrapper[4766]: I0129 11:43:03.558779 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk4n4\" (UniqueName: \"kubernetes.io/projected/e5e2bd8b-a38a-406c-b237-ff9e369d107c-kube-api-access-nk4n4\") pod \"openstack-operator-controller-init-f45dc54dc-g5pbs\" (UID: \"e5e2bd8b-a38a-406c-b237-ff9e369d107c\") " pod="openstack-operators/openstack-operator-controller-init-f45dc54dc-g5pbs" Jan 29 11:43:03 crc kubenswrapper[4766]: I0129 11:43:03.703669 4766 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-f45dc54dc-g5pbs" Jan 29 11:43:03 crc kubenswrapper[4766]: I0129 11:43:03.973190 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-f45dc54dc-g5pbs"] Jan 29 11:43:04 crc kubenswrapper[4766]: I0129 11:43:04.992534 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-f45dc54dc-g5pbs" event={"ID":"e5e2bd8b-a38a-406c-b237-ff9e369d107c","Type":"ContainerStarted","Data":"91ac8424d7de124a87714838276a15a3f506c507851cad2bff6c37b56676d95b"} Jan 29 11:43:08 crc kubenswrapper[4766]: I0129 11:43:08.011967 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-f45dc54dc-g5pbs" event={"ID":"e5e2bd8b-a38a-406c-b237-ff9e369d107c","Type":"ContainerStarted","Data":"ccfe2f05b104714392240b371ada63ee73734528563dba2e595c189dc115aa70"} Jan 29 11:43:08 crc kubenswrapper[4766]: I0129 11:43:08.012387 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-f45dc54dc-g5pbs" Jan 29 11:43:08 crc kubenswrapper[4766]: I0129 11:43:08.034996 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-f45dc54dc-g5pbs" podStartSLOduration=1.466013039 podStartE2EDuration="5.034980121s" podCreationTimestamp="2026-01-29 11:43:03 +0000 UTC" firstStartedPulling="2026-01-29 11:43:04.000208251 +0000 UTC m=+1321.112601272" lastFinishedPulling="2026-01-29 11:43:07.569175353 +0000 UTC m=+1324.681568354" observedRunningTime="2026-01-29 11:43:08.034307922 +0000 UTC m=+1325.146700943" watchObservedRunningTime="2026-01-29 11:43:08.034980121 +0000 UTC m=+1325.147373132" Jan 29 11:43:13 crc kubenswrapper[4766]: I0129 11:43:13.706971 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-f45dc54dc-g5pbs" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.062241 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hn2zr"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.063831 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hn2zr" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.065335 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-9sdk7" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.076151 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hn2zr"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.083665 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-zqq8r"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.094565 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-zqq8r" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.097301 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-4j7sq"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.097637 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-jzwnp" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.098198 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4j7sq" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.101290 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-lrknq" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.107462 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-zqq8r"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.123848 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-4j7sq"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.136766 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-rgj67"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.137572 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-rgj67" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.143642 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-nggxk" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.159235 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-zw2qx"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.160031 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zw2qx" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.164845 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-j4s5f" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.174083 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-rgj67"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.184077 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-gwttp"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.185670 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gwttp" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.188465 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-tsklw" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.207114 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-gwttp"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.214163 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-zw2qx"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.221624 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-c5gtf"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.218164 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7npzt\" (UniqueName: \"kubernetes.io/projected/ab5afb01-eba4-4480-a437-4d2e0cdb16bb-kube-api-access-7npzt\") pod \"glance-operator-controller-manager-8886f4c47-rgj67\" (UID: \"ab5afb01-eba4-4480-a437-4d2e0cdb16bb\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-rgj67" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.222272 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g27x\" (UniqueName: \"kubernetes.io/projected/0f72b13e-c9ce-4ada-ace2-432d17b8784e-kube-api-access-5g27x\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-hn2zr\" (UID: \"0f72b13e-c9ce-4ada-ace2-432d17b8784e\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hn2zr" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.222391 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wshbc\" (UniqueName: \"kubernetes.io/projected/7d553af8-9c25-432a-bb68-5402fbd6221e-kube-api-access-wshbc\") pod \"cinder-operator-controller-manager-8d874c8fc-zqq8r\" (UID: \"7d553af8-9c25-432a-bb68-5402fbd6221e\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-zqq8r" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.222527 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r72jp\" (UniqueName: \"kubernetes.io/projected/767a94c6-6767-4dc9-9054-70945f39e248-kube-api-access-r72jp\") pod \"heat-operator-controller-manager-69d6db494d-zw2qx\" (UID: \"767a94c6-6767-4dc9-9054-70945f39e248\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zw2qx" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.222569 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hms26\" (UniqueName: \"kubernetes.io/projected/19efe92b-6dae-4b62-920b-0348877b5217-kube-api-access-hms26\") pod \"designate-operator-controller-manager-6d9697b7f4-4j7sq\" (UID: \"19efe92b-6dae-4b62-920b-0348877b5217\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4j7sq" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.222752 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-c5gtf" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.224622 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-jtjxx" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.236012 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-jhhql"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.236823 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-jhhql" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.240647 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-c5gtf"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.246524 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-kxzlg" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.246844 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.259704 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-jhhql"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.276479 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-kbqkb"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.278933 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kbqkb" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.282945 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-p4hc2" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.290009 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-cb57l"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.290936 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cb57l" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.297069 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-mfjpf" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.309474 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-gpc9d"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.310970 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-gpc9d" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.312958 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-wdlc7" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.323250 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g27x\" (UniqueName: \"kubernetes.io/projected/0f72b13e-c9ce-4ada-ace2-432d17b8784e-kube-api-access-5g27x\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-hn2zr\" (UID: \"0f72b13e-c9ce-4ada-ace2-432d17b8784e\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hn2zr" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.323292 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcnwc\" (UniqueName: \"kubernetes.io/projected/c2f7549b-08ae-4ec0-96ac-25997e35d30e-kube-api-access-jcnwc\") pod \"ironic-operator-controller-manager-5f4b8bd54d-c5gtf\" (UID: \"c2f7549b-08ae-4ec0-96ac-25997e35d30e\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-c5gtf" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.323323 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwcrh\" (UniqueName: \"kubernetes.io/projected/8e6fc747-e7e2-438d-a00e-3ab94b806035-kube-api-access-nwcrh\") pod \"horizon-operator-controller-manager-5fb775575f-gwttp\" (UID: \"8e6fc747-e7e2-438d-a00e-3ab94b806035\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gwttp" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.323356 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wshbc\" (UniqueName: \"kubernetes.io/projected/7d553af8-9c25-432a-bb68-5402fbd6221e-kube-api-access-wshbc\") pod \"cinder-operator-controller-manager-8d874c8fc-zqq8r\" (UID: \"7d553af8-9c25-432a-bb68-5402fbd6221e\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-zqq8r" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.323378 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2zqp\" (UniqueName: \"kubernetes.io/projected/fa222cda-3f2c-49fb-9d14-466dce8c9c40-kube-api-access-s2zqp\") pod \"infra-operator-controller-manager-79955696d6-jhhql\" (UID: \"fa222cda-3f2c-49fb-9d14-466dce8c9c40\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jhhql" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.323398 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r72jp\" (UniqueName: \"kubernetes.io/projected/767a94c6-6767-4dc9-9054-70945f39e248-kube-api-access-r72jp\") pod \"heat-operator-controller-manager-69d6db494d-zw2qx\" (UID: \"767a94c6-6767-4dc9-9054-70945f39e248\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zw2qx" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.323432 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hms26\" (UniqueName: \"kubernetes.io/projected/19efe92b-6dae-4b62-920b-0348877b5217-kube-api-access-hms26\") pod \"designate-operator-controller-manager-6d9697b7f4-4j7sq\" (UID: \"19efe92b-6dae-4b62-920b-0348877b5217\") " 
pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4j7sq" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.323455 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7npzt\" (UniqueName: \"kubernetes.io/projected/ab5afb01-eba4-4480-a437-4d2e0cdb16bb-kube-api-access-7npzt\") pod \"glance-operator-controller-manager-8886f4c47-rgj67\" (UID: \"ab5afb01-eba4-4480-a437-4d2e0cdb16bb\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-rgj67" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.323496 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa222cda-3f2c-49fb-9d14-466dce8c9c40-cert\") pod \"infra-operator-controller-manager-79955696d6-jhhql\" (UID: \"fa222cda-3f2c-49fb-9d14-466dce8c9c40\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jhhql" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.336338 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-cb57l"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.358798 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-gpc9d"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.363221 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-kbqkb"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.363029 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7npzt\" (UniqueName: \"kubernetes.io/projected/ab5afb01-eba4-4480-a437-4d2e0cdb16bb-kube-api-access-7npzt\") pod \"glance-operator-controller-manager-8886f4c47-rgj67\" (UID: \"ab5afb01-eba4-4480-a437-4d2e0cdb16bb\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-rgj67" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.371543 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wshbc\" (UniqueName: \"kubernetes.io/projected/7d553af8-9c25-432a-bb68-5402fbd6221e-kube-api-access-wshbc\") pod \"cinder-operator-controller-manager-8d874c8fc-zqq8r\" (UID: \"7d553af8-9c25-432a-bb68-5402fbd6221e\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-zqq8r" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.381869 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hms26\" (UniqueName: \"kubernetes.io/projected/19efe92b-6dae-4b62-920b-0348877b5217-kube-api-access-hms26\") pod \"designate-operator-controller-manager-6d9697b7f4-4j7sq\" (UID: \"19efe92b-6dae-4b62-920b-0348877b5217\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4j7sq" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.382541 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g27x\" (UniqueName: \"kubernetes.io/projected/0f72b13e-c9ce-4ada-ace2-432d17b8784e-kube-api-access-5g27x\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-hn2zr\" (UID: \"0f72b13e-c9ce-4ada-ace2-432d17b8784e\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hn2zr" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.387958 4766 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-g5nzh"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.391402 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hn2zr" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.417013 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-zqq8r" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.419121 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-g5nzh" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.428380 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4j7sq" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.429894 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r72jp\" (UniqueName: \"kubernetes.io/projected/767a94c6-6767-4dc9-9054-70945f39e248-kube-api-access-r72jp\") pod \"heat-operator-controller-manager-69d6db494d-zw2qx\" (UID: \"767a94c6-6767-4dc9-9054-70945f39e248\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zw2qx" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.431484 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcnwc\" (UniqueName: \"kubernetes.io/projected/c2f7549b-08ae-4ec0-96ac-25997e35d30e-kube-api-access-jcnwc\") pod \"ironic-operator-controller-manager-5f4b8bd54d-c5gtf\" (UID: \"c2f7549b-08ae-4ec0-96ac-25997e35d30e\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-c5gtf" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.431580 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwcrh\" (UniqueName: \"kubernetes.io/projected/8e6fc747-e7e2-438d-a00e-3ab94b806035-kube-api-access-nwcrh\") pod \"horizon-operator-controller-manager-5fb775575f-gwttp\" (UID: \"8e6fc747-e7e2-438d-a00e-3ab94b806035\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gwttp" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.431644 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x55r\" (UniqueName: \"kubernetes.io/projected/a9f5e2bf-dd4c-405f-9a1c-439c3abea9f6-kube-api-access-4x55r\") pod \"manila-operator-controller-manager-7dd968899f-kbqkb\" (UID: \"a9f5e2bf-dd4c-405f-9a1c-439c3abea9f6\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kbqkb" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.431730 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2zqp\" (UniqueName: \"kubernetes.io/projected/fa222cda-3f2c-49fb-9d14-466dce8c9c40-kube-api-access-s2zqp\") pod \"infra-operator-controller-manager-79955696d6-jhhql\" (UID: \"fa222cda-3f2c-49fb-9d14-466dce8c9c40\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jhhql" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.432353 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87858\" (UniqueName: \"kubernetes.io/projected/d93f68fe-b726-4e2d-afa4-9b789a96dc55-kube-api-access-87858\") pod 
\"mariadb-operator-controller-manager-67bf948998-gpc9d\" (UID: \"d93f68fe-b726-4e2d-afa4-9b789a96dc55\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-gpc9d" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.432400 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa222cda-3f2c-49fb-9d14-466dce8c9c40-cert\") pod \"infra-operator-controller-manager-79955696d6-jhhql\" (UID: \"fa222cda-3f2c-49fb-9d14-466dce8c9c40\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jhhql" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.432482 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcxgj\" (UniqueName: \"kubernetes.io/projected/c98f5447-fb23-4b08-b5a8-70bce28d9bb7-kube-api-access-qcxgj\") pod \"keystone-operator-controller-manager-84f48565d4-cb57l\" (UID: \"c98f5447-fb23-4b08-b5a8-70bce28d9bb7\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cb57l" Jan 29 11:43:31 crc kubenswrapper[4766]: E0129 11:43:31.432726 4766 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 11:43:31 crc kubenswrapper[4766]: E0129 11:43:31.432797 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa222cda-3f2c-49fb-9d14-466dce8c9c40-cert podName:fa222cda-3f2c-49fb-9d14-466dce8c9c40 nodeName:}" failed. No retries permitted until 2026-01-29 11:43:31.932775535 +0000 UTC m=+1349.045168546 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fa222cda-3f2c-49fb-9d14-466dce8c9c40-cert") pod "infra-operator-controller-manager-79955696d6-jhhql" (UID: "fa222cda-3f2c-49fb-9d14-466dce8c9c40") : secret "infra-operator-webhook-server-cert" not found Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.440814 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-m29dp" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.473560 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-nn778"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.474683 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-nn778" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.479856 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-rgj67" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.483707 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-dnx4b" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.487499 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2zqp\" (UniqueName: \"kubernetes.io/projected/fa222cda-3f2c-49fb-9d14-466dce8c9c40-kube-api-access-s2zqp\") pod \"infra-operator-controller-manager-79955696d6-jhhql\" (UID: \"fa222cda-3f2c-49fb-9d14-466dce8c9c40\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jhhql" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.487757 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwcrh\" (UniqueName: \"kubernetes.io/projected/8e6fc747-e7e2-438d-a00e-3ab94b806035-kube-api-access-nwcrh\") pod \"horizon-operator-controller-manager-5fb775575f-gwttp\" (UID: \"8e6fc747-e7e2-438d-a00e-3ab94b806035\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gwttp" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.491469 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zw2qx" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.516425 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcnwc\" (UniqueName: \"kubernetes.io/projected/c2f7549b-08ae-4ec0-96ac-25997e35d30e-kube-api-access-jcnwc\") pod \"ironic-operator-controller-manager-5f4b8bd54d-c5gtf\" (UID: \"c2f7549b-08ae-4ec0-96ac-25997e35d30e\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-c5gtf" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.516512 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-lc5rh"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.517549 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-lc5rh" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.518202 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gwttp" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.520535 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-xw4kr" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.533874 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x55r\" (UniqueName: \"kubernetes.io/projected/a9f5e2bf-dd4c-405f-9a1c-439c3abea9f6-kube-api-access-4x55r\") pod \"manila-operator-controller-manager-7dd968899f-kbqkb\" (UID: \"a9f5e2bf-dd4c-405f-9a1c-439c3abea9f6\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kbqkb" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.533925 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng86c\" (UniqueName: \"kubernetes.io/projected/38e05981-7669-4ee4-af1a-8ba826587cda-kube-api-access-ng86c\") pod \"nova-operator-controller-manager-55bff696bd-nn778\" (UID: \"38e05981-7669-4ee4-af1a-8ba826587cda\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-nn778" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.533971 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws46n\" (UniqueName: \"kubernetes.io/projected/a3ac13ec-bcf6-40f8-be96-d4302334f324-kube-api-access-ws46n\") pod \"neutron-operator-controller-manager-585dbc889-g5nzh\" (UID: \"a3ac13ec-bcf6-40f8-be96-d4302334f324\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-g5nzh" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.534023 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87858\" (UniqueName: \"kubernetes.io/projected/d93f68fe-b726-4e2d-afa4-9b789a96dc55-kube-api-access-87858\") pod \"mariadb-operator-controller-manager-67bf948998-gpc9d\" (UID: \"d93f68fe-b726-4e2d-afa4-9b789a96dc55\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-gpc9d" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.534072 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxgj\" (UniqueName: \"kubernetes.io/projected/c98f5447-fb23-4b08-b5a8-70bce28d9bb7-kube-api-access-qcxgj\") pod \"keystone-operator-controller-manager-84f48565d4-cb57l\" (UID: \"c98f5447-fb23-4b08-b5a8-70bce28d9bb7\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cb57l" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.545186 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-nn778"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.557495 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcxgj\" (UniqueName: \"kubernetes.io/projected/c98f5447-fb23-4b08-b5a8-70bce28d9bb7-kube-api-access-qcxgj\") pod \"keystone-operator-controller-manager-84f48565d4-cb57l\" (UID: \"c98f5447-fb23-4b08-b5a8-70bce28d9bb7\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cb57l" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.557830 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-c5gtf" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.563111 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-g5nzh"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.569121 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x55r\" (UniqueName: \"kubernetes.io/projected/a9f5e2bf-dd4c-405f-9a1c-439c3abea9f6-kube-api-access-4x55r\") pod \"manila-operator-controller-manager-7dd968899f-kbqkb\" (UID: \"a9f5e2bf-dd4c-405f-9a1c-439c3abea9f6\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kbqkb" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.578494 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-lc5rh"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.582397 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87858\" (UniqueName: \"kubernetes.io/projected/d93f68fe-b726-4e2d-afa4-9b789a96dc55-kube-api-access-87858\") pod \"mariadb-operator-controller-manager-67bf948998-gpc9d\" (UID: \"d93f68fe-b726-4e2d-afa4-9b789a96dc55\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-gpc9d" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.599652 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-bcpmv"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.601010 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-bcpmv" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.603675 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-8qsc8" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.611208 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kbqkb" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.636385 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znrkt\" (UniqueName: \"kubernetes.io/projected/dbcef236-480a-41a2-8462-4695dc762ed1-kube-api-access-znrkt\") pod \"ovn-operator-controller-manager-788c46999f-bcpmv\" (UID: \"dbcef236-480a-41a2-8462-4695dc762ed1\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-bcpmv" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.636458 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk8nd\" (UniqueName: \"kubernetes.io/projected/16096f77-0fe2-498f-8b86-480d699b9fd6-kube-api-access-lk8nd\") pod \"octavia-operator-controller-manager-6687f8d877-lc5rh\" (UID: \"16096f77-0fe2-498f-8b86-480d699b9fd6\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-lc5rh" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.636500 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng86c\" (UniqueName: \"kubernetes.io/projected/38e05981-7669-4ee4-af1a-8ba826587cda-kube-api-access-ng86c\") pod \"nova-operator-controller-manager-55bff696bd-nn778\" (UID: \"38e05981-7669-4ee4-af1a-8ba826587cda\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-nn778" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.636549 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws46n\" (UniqueName: \"kubernetes.io/projected/a3ac13ec-bcf6-40f8-be96-d4302334f324-kube-api-access-ws46n\") pod \"neutron-operator-controller-manager-585dbc889-g5nzh\" (UID: \"a3ac13ec-bcf6-40f8-be96-d4302334f324\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-g5nzh" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.638185 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-bcpmv"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.642102 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cb57l" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.657092 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.660088 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.664803 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-gth8s"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.668332 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gth8s" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.669061 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-8xvmt"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.669107 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.669451 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-zxrgg" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.669787 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-gpc9d" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.671538 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-8xvmt" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.672304 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-zph4j" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.673141 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng86c\" (UniqueName: \"kubernetes.io/projected/38e05981-7669-4ee4-af1a-8ba826587cda-kube-api-access-ng86c\") pod \"nova-operator-controller-manager-55bff696bd-nn778\" (UID: \"38e05981-7669-4ee4-af1a-8ba826587cda\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-nn778" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.691401 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws46n\" (UniqueName: \"kubernetes.io/projected/a3ac13ec-bcf6-40f8-be96-d4302334f324-kube-api-access-ws46n\") pod \"neutron-operator-controller-manager-585dbc889-g5nzh\" (UID: \"a3ac13ec-bcf6-40f8-be96-d4302334f324\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-g5nzh" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.704342 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-kcl6h" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.731457 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-8xvmt"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.737336 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v649r\" (UniqueName: \"kubernetes.io/projected/8b1a55ff-1c16-4f2d-a92a-f00adeff5423-kube-api-access-v649r\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2\" (UID: \"8b1a55ff-1c16-4f2d-a92a-f00adeff5423\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.737458 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znrkt\" (UniqueName: \"kubernetes.io/projected/dbcef236-480a-41a2-8462-4695dc762ed1-kube-api-access-znrkt\") pod \"ovn-operator-controller-manager-788c46999f-bcpmv\" (UID: \"dbcef236-480a-41a2-8462-4695dc762ed1\") " 
pod="openstack-operators/ovn-operator-controller-manager-788c46999f-bcpmv" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.737486 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk8nd\" (UniqueName: \"kubernetes.io/projected/16096f77-0fe2-498f-8b86-480d699b9fd6-kube-api-access-lk8nd\") pod \"octavia-operator-controller-manager-6687f8d877-lc5rh\" (UID: \"16096f77-0fe2-498f-8b86-480d699b9fd6\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-lc5rh" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.737510 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4bmr\" (UniqueName: \"kubernetes.io/projected/215d318f-0aac-4fa1-9d80-c162d3922e62-kube-api-access-b4bmr\") pod \"swift-operator-controller-manager-68fc8c869-8xvmt\" (UID: \"215d318f-0aac-4fa1-9d80-c162d3922e62\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-8xvmt" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.737556 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nt7c\" (UniqueName: \"kubernetes.io/projected/eed499c2-b0c1-4fbb-b0e6-543d9f1ac230-kube-api-access-8nt7c\") pod \"placement-operator-controller-manager-5b964cf4cd-gth8s\" (UID: \"eed499c2-b0c1-4fbb-b0e6-543d9f1ac230\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gth8s" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.737614 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b1a55ff-1c16-4f2d-a92a-f00adeff5423-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2\" (UID: \"8b1a55ff-1c16-4f2d-a92a-f00adeff5423\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.741373 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-gth8s"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.765319 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.778883 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk8nd\" (UniqueName: \"kubernetes.io/projected/16096f77-0fe2-498f-8b86-480d699b9fd6-kube-api-access-lk8nd\") pod \"octavia-operator-controller-manager-6687f8d877-lc5rh\" (UID: \"16096f77-0fe2-498f-8b86-480d699b9fd6\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-lc5rh" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.801547 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-w7n2r"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.802463 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znrkt\" (UniqueName: \"kubernetes.io/projected/dbcef236-480a-41a2-8462-4695dc762ed1-kube-api-access-znrkt\") pod \"ovn-operator-controller-manager-788c46999f-bcpmv\" (UID: \"dbcef236-480a-41a2-8462-4695dc762ed1\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-bcpmv" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.802509 4766 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-w7n2r" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.805540 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-m8d2m" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.834263 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-g5nzh" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.834924 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-w7n2r"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.842400 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4bmr\" (UniqueName: \"kubernetes.io/projected/215d318f-0aac-4fa1-9d80-c162d3922e62-kube-api-access-b4bmr\") pod \"swift-operator-controller-manager-68fc8c869-8xvmt\" (UID: \"215d318f-0aac-4fa1-9d80-c162d3922e62\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-8xvmt" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.842470 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt7c\" (UniqueName: \"kubernetes.io/projected/eed499c2-b0c1-4fbb-b0e6-543d9f1ac230-kube-api-access-8nt7c\") pod \"placement-operator-controller-manager-5b964cf4cd-gth8s\" (UID: \"eed499c2-b0c1-4fbb-b0e6-543d9f1ac230\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gth8s" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.842506 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2m47\" (UniqueName: \"kubernetes.io/projected/1d209024-1c33-4b4e-af1c-71c6039e69c9-kube-api-access-q2m47\") pod \"telemetry-operator-controller-manager-64b5b76f97-w7n2r\" (UID: \"1d209024-1c33-4b4e-af1c-71c6039e69c9\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-w7n2r" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.845259 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b1a55ff-1c16-4f2d-a92a-f00adeff5423-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2\" (UID: \"8b1a55ff-1c16-4f2d-a92a-f00adeff5423\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.845447 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v649r\" (UniqueName: \"kubernetes.io/projected/8b1a55ff-1c16-4f2d-a92a-f00adeff5423-kube-api-access-v649r\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2\" (UID: \"8b1a55ff-1c16-4f2d-a92a-f00adeff5423\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2" Jan 29 11:43:31 crc kubenswrapper[4766]: E0129 11:43:31.846010 4766 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:43:31 crc kubenswrapper[4766]: E0129 11:43:31.846058 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b1a55ff-1c16-4f2d-a92a-f00adeff5423-cert 
podName:8b1a55ff-1c16-4f2d-a92a-f00adeff5423 nodeName:}" failed. No retries permitted until 2026-01-29 11:43:32.346043365 +0000 UTC m=+1349.458436366 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8b1a55ff-1c16-4f2d-a92a-f00adeff5423-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2" (UID: "8b1a55ff-1c16-4f2d-a92a-f00adeff5423") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.850480 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-6snl2"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.851879 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-6snl2" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.860651 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-6snl2"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.870154 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-t8vrs" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.875206 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-nn778" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.884039 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-zgjhb"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.884888 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-zgjhb" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.894756 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-zgjhb"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.896351 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-9mndx" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.899655 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4bmr\" (UniqueName: \"kubernetes.io/projected/215d318f-0aac-4fa1-9d80-c162d3922e62-kube-api-access-b4bmr\") pod \"swift-operator-controller-manager-68fc8c869-8xvmt\" (UID: \"215d318f-0aac-4fa1-9d80-c162d3922e62\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-8xvmt" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.904432 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v649r\" (UniqueName: \"kubernetes.io/projected/8b1a55ff-1c16-4f2d-a92a-f00adeff5423-kube-api-access-v649r\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2\" (UID: \"8b1a55ff-1c16-4f2d-a92a-f00adeff5423\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.910333 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt7c\" (UniqueName: \"kubernetes.io/projected/eed499c2-b0c1-4fbb-b0e6-543d9f1ac230-kube-api-access-8nt7c\") pod \"placement-operator-controller-manager-5b964cf4cd-gth8s\" (UID: \"eed499c2-b0c1-4fbb-b0e6-543d9f1ac230\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gth8s" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.914848 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-lc5rh" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.938755 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-bcpmv" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.945954 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp"] Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.964044 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9sd2\" (UniqueName: \"kubernetes.io/projected/248cfd14-922e-4123-9a39-849d292613f0-kube-api-access-g9sd2\") pod \"watcher-operator-controller-manager-564965969-zgjhb\" (UID: \"248cfd14-922e-4123-9a39-849d292613f0\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-zgjhb" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.964118 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2m47\" (UniqueName: \"kubernetes.io/projected/1d209024-1c33-4b4e-af1c-71c6039e69c9-kube-api-access-q2m47\") pod \"telemetry-operator-controller-manager-64b5b76f97-w7n2r\" (UID: \"1d209024-1c33-4b4e-af1c-71c6039e69c9\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-w7n2r" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.964190 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa222cda-3f2c-49fb-9d14-466dce8c9c40-cert\") pod \"infra-operator-controller-manager-79955696d6-jhhql\" (UID: \"fa222cda-3f2c-49fb-9d14-466dce8c9c40\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jhhql" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.964223 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q87j6\" (UniqueName: \"kubernetes.io/projected/cacfa742-e2bf-48f3-8da2-3a6f7d66f60e-kube-api-access-q87j6\") pod \"test-operator-controller-manager-56f8bfcd9f-6snl2\" (UID: \"cacfa742-e2bf-48f3-8da2-3a6f7d66f60e\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-6snl2" Jan 29 11:43:31 crc kubenswrapper[4766]: E0129 11:43:31.964696 4766 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 11:43:31 crc kubenswrapper[4766]: E0129 11:43:31.964753 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa222cda-3f2c-49fb-9d14-466dce8c9c40-cert podName:fa222cda-3f2c-49fb-9d14-466dce8c9c40 nodeName:}" failed. No retries permitted until 2026-01-29 11:43:32.964734988 +0000 UTC m=+1350.077127989 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fa222cda-3f2c-49fb-9d14-466dce8c9c40-cert") pod "infra-operator-controller-manager-79955696d6-jhhql" (UID: "fa222cda-3f2c-49fb-9d14-466dce8c9c40") : secret "infra-operator-webhook-server-cert" not found Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.966594 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.973495 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.980665 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-cgx9x" Jan 29 11:43:31 crc kubenswrapper[4766]: I0129 11:43:31.980931 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.009143 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2m47\" (UniqueName: \"kubernetes.io/projected/1d209024-1c33-4b4e-af1c-71c6039e69c9-kube-api-access-q2m47\") pod \"telemetry-operator-controller-manager-64b5b76f97-w7n2r\" (UID: \"1d209024-1c33-4b4e-af1c-71c6039e69c9\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-w7n2r" Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.037101 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp"] Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.054970 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2xvf8"] Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.056055 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2xvf8" Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.059397 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-rrwrd" Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.059789 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-8xvmt" Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.065486 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2xvf8"] Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.066263 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-metrics-certs\") pod \"openstack-operator-controller-manager-66cc5c7d8c-wrglp\" (UID: \"bd3e0529-a99d-4174-a9b1-7a937bf09579\") " pod="openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp" Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.066345 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q87j6\" (UniqueName: \"kubernetes.io/projected/cacfa742-e2bf-48f3-8da2-3a6f7d66f60e-kube-api-access-q87j6\") pod \"test-operator-controller-manager-56f8bfcd9f-6snl2\" (UID: \"cacfa742-e2bf-48f3-8da2-3a6f7d66f60e\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-6snl2" Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.066396 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx876\" (UniqueName: \"kubernetes.io/projected/bd3e0529-a99d-4174-a9b1-7a937bf09579-kube-api-access-wx876\") pod \"openstack-operator-controller-manager-66cc5c7d8c-wrglp\" (UID: \"bd3e0529-a99d-4174-a9b1-7a937bf09579\") " pod="openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp" Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.066463 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-webhook-certs\") pod \"openstack-operator-controller-manager-66cc5c7d8c-wrglp\" (UID: \"bd3e0529-a99d-4174-a9b1-7a937bf09579\") " pod="openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp" Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.066494 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9sd2\" (UniqueName: \"kubernetes.io/projected/248cfd14-922e-4123-9a39-849d292613f0-kube-api-access-g9sd2\") pod \"watcher-operator-controller-manager-564965969-zgjhb\" (UID: \"248cfd14-922e-4123-9a39-849d292613f0\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-zgjhb" Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.076317 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hn2zr"] Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.085319 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9sd2\" (UniqueName: \"kubernetes.io/projected/248cfd14-922e-4123-9a39-849d292613f0-kube-api-access-g9sd2\") pod \"watcher-operator-controller-manager-564965969-zgjhb\" (UID: \"248cfd14-922e-4123-9a39-849d292613f0\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-zgjhb" Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.091873 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q87j6\" (UniqueName: \"kubernetes.io/projected/cacfa742-e2bf-48f3-8da2-3a6f7d66f60e-kube-api-access-q87j6\") pod 
\"test-operator-controller-manager-56f8bfcd9f-6snl2\" (UID: \"cacfa742-e2bf-48f3-8da2-3a6f7d66f60e\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-6snl2" Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.092543 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gth8s" Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.099883 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-4j7sq"] Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.117139 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-zqq8r"] Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.155672 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-w7n2r" Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.171229 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx876\" (UniqueName: \"kubernetes.io/projected/bd3e0529-a99d-4174-a9b1-7a937bf09579-kube-api-access-wx876\") pod \"openstack-operator-controller-manager-66cc5c7d8c-wrglp\" (UID: \"bd3e0529-a99d-4174-a9b1-7a937bf09579\") " pod="openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp" Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.171357 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-webhook-certs\") pod \"openstack-operator-controller-manager-66cc5c7d8c-wrglp\" (UID: \"bd3e0529-a99d-4174-a9b1-7a937bf09579\") " pod="openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp" Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.171506 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-metrics-certs\") pod \"openstack-operator-controller-manager-66cc5c7d8c-wrglp\" (UID: \"bd3e0529-a99d-4174-a9b1-7a937bf09579\") " pod="openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp" Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.171563 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktvrb\" (UniqueName: \"kubernetes.io/projected/df93cd48-3695-40d8-a9e5-7321f57034ed-kube-api-access-ktvrb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2xvf8\" (UID: \"df93cd48-3695-40d8-a9e5-7321f57034ed\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2xvf8" Jan 29 11:43:32 crc kubenswrapper[4766]: E0129 11:43:32.172153 4766 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 11:43:32 crc kubenswrapper[4766]: E0129 11:43:32.172226 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-webhook-certs podName:bd3e0529-a99d-4174-a9b1-7a937bf09579 nodeName:}" failed. No retries permitted until 2026-01-29 11:43:32.672208416 +0000 UTC m=+1349.784601497 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-webhook-certs") pod "openstack-operator-controller-manager-66cc5c7d8c-wrglp" (UID: "bd3e0529-a99d-4174-a9b1-7a937bf09579") : secret "webhook-server-cert" not found Jan 29 11:43:32 crc kubenswrapper[4766]: E0129 11:43:32.172376 4766 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 11:43:32 crc kubenswrapper[4766]: E0129 11:43:32.172438 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-metrics-certs podName:bd3e0529-a99d-4174-a9b1-7a937bf09579 nodeName:}" failed. No retries permitted until 2026-01-29 11:43:32.672397671 +0000 UTC m=+1349.784790682 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-metrics-certs") pod "openstack-operator-controller-manager-66cc5c7d8c-wrglp" (UID: "bd3e0529-a99d-4174-a9b1-7a937bf09579") : secret "metrics-server-cert" not found Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.198328 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx876\" (UniqueName: \"kubernetes.io/projected/bd3e0529-a99d-4174-a9b1-7a937bf09579-kube-api-access-wx876\") pod \"openstack-operator-controller-manager-66cc5c7d8c-wrglp\" (UID: \"bd3e0529-a99d-4174-a9b1-7a937bf09579\") " pod="openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp" Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.206208 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hn2zr" event={"ID":"0f72b13e-c9ce-4ada-ace2-432d17b8784e","Type":"ContainerStarted","Data":"8801cb26e5ab6bff4399925ac6528eee6eacaa8f70d6f7b1132e5670102e934d"} Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.208094 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4j7sq" event={"ID":"19efe92b-6dae-4b62-920b-0348877b5217","Type":"ContainerStarted","Data":"d11b503844eaadd883b61c61e258ddb663c86e39ea02bc9590c35b7494c11ebc"} Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.210805 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-6snl2" Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.221938 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-zgjhb" Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.273364 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktvrb\" (UniqueName: \"kubernetes.io/projected/df93cd48-3695-40d8-a9e5-7321f57034ed-kube-api-access-ktvrb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2xvf8\" (UID: \"df93cd48-3695-40d8-a9e5-7321f57034ed\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2xvf8" Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.291758 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-rgj67"] Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.301739 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktvrb\" (UniqueName: \"kubernetes.io/projected/df93cd48-3695-40d8-a9e5-7321f57034ed-kube-api-access-ktvrb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2xvf8\" (UID: \"df93cd48-3695-40d8-a9e5-7321f57034ed\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2xvf8" Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.331874 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-zw2qx"] Jan 29 11:43:32 crc kubenswrapper[4766]: W0129 11:43:32.339299 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab5afb01_eba4_4480_a437_4d2e0cdb16bb.slice/crio-184021fc258d50c3486d9f44c4fae6b20debb4d8836e781843176d435a1c6dc8 WatchSource:0}: Error finding container 184021fc258d50c3486d9f44c4fae6b20debb4d8836e781843176d435a1c6dc8: Status 404 returned error can't find the container with id 184021fc258d50c3486d9f44c4fae6b20debb4d8836e781843176d435a1c6dc8 Jan 29 11:43:32 crc kubenswrapper[4766]: W0129 11:43:32.349959 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d553af8_9c25_432a_bb68_5402fbd6221e.slice/crio-f9a6604c17930e94808878ea9df11fae3fb4ca78ec06a8d4623efe24d2ab6074 WatchSource:0}: Error finding container f9a6604c17930e94808878ea9df11fae3fb4ca78ec06a8d4623efe24d2ab6074: Status 404 returned error can't find the container with id f9a6604c17930e94808878ea9df11fae3fb4ca78ec06a8d4623efe24d2ab6074 Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.385185 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b1a55ff-1c16-4f2d-a92a-f00adeff5423-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2\" (UID: \"8b1a55ff-1c16-4f2d-a92a-f00adeff5423\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2" Jan 29 11:43:32 crc kubenswrapper[4766]: E0129 11:43:32.385522 4766 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:43:32 crc kubenswrapper[4766]: E0129 11:43:32.385577 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b1a55ff-1c16-4f2d-a92a-f00adeff5423-cert podName:8b1a55ff-1c16-4f2d-a92a-f00adeff5423 nodeName:}" failed. No retries permitted until 2026-01-29 11:43:33.385560647 +0000 UTC m=+1350.497953658 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8b1a55ff-1c16-4f2d-a92a-f00adeff5423-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2" (UID: "8b1a55ff-1c16-4f2d-a92a-f00adeff5423") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.419467 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2xvf8" Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.547586 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-gwttp"] Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.691254 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-kbqkb"] Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.691966 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-webhook-certs\") pod \"openstack-operator-controller-manager-66cc5c7d8c-wrglp\" (UID: \"bd3e0529-a99d-4174-a9b1-7a937bf09579\") " pod="openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp" Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.692039 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-metrics-certs\") pod \"openstack-operator-controller-manager-66cc5c7d8c-wrglp\" (UID: \"bd3e0529-a99d-4174-a9b1-7a937bf09579\") " pod="openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp" Jan 29 11:43:32 crc kubenswrapper[4766]: E0129 11:43:32.692210 4766 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 11:43:32 crc kubenswrapper[4766]: E0129 11:43:32.692256 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-webhook-certs podName:bd3e0529-a99d-4174-a9b1-7a937bf09579 nodeName:}" failed. No retries permitted until 2026-01-29 11:43:33.692240699 +0000 UTC m=+1350.804633710 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-webhook-certs") pod "openstack-operator-controller-manager-66cc5c7d8c-wrglp" (UID: "bd3e0529-a99d-4174-a9b1-7a937bf09579") : secret "webhook-server-cert" not found Jan 29 11:43:32 crc kubenswrapper[4766]: E0129 11:43:32.692211 4766 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 11:43:32 crc kubenswrapper[4766]: E0129 11:43:32.692330 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-metrics-certs podName:bd3e0529-a99d-4174-a9b1-7a937bf09579 nodeName:}" failed. No retries permitted until 2026-01-29 11:43:33.692316961 +0000 UTC m=+1350.804709972 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-metrics-certs") pod "openstack-operator-controller-manager-66cc5c7d8c-wrglp" (UID: "bd3e0529-a99d-4174-a9b1-7a937bf09579") : secret "metrics-server-cert" not found Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.699213 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-gpc9d"] Jan 29 11:43:32 crc kubenswrapper[4766]: W0129 11:43:32.747056 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9f5e2bf_dd4c_405f_9a1c_439c3abea9f6.slice/crio-159655706cf1dcc7106242226037a22a52fadd14470135a472948c105ee7ec27 WatchSource:0}: Error finding container 159655706cf1dcc7106242226037a22a52fadd14470135a472948c105ee7ec27: Status 404 returned error can't find the container with id 159655706cf1dcc7106242226037a22a52fadd14470135a472948c105ee7ec27 Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.918762 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-c5gtf"] Jan 29 11:43:32 crc kubenswrapper[4766]: W0129 11:43:32.926428 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2f7549b_08ae_4ec0_96ac_25997e35d30e.slice/crio-5dd9bd6b4ecd0070b74633b0b5eab5d460c27a360d25e76133372cf382597b13 WatchSource:0}: Error finding container 5dd9bd6b4ecd0070b74633b0b5eab5d460c27a360d25e76133372cf382597b13: Status 404 returned error can't find the container with id 5dd9bd6b4ecd0070b74633b0b5eab5d460c27a360d25e76133372cf382597b13 Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.930952 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-cb57l"] Jan 29 11:43:32 crc kubenswrapper[4766]: W0129 11:43:32.935916 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc98f5447_fb23_4b08_b5a8_70bce28d9bb7.slice/crio-c23d70d6798f573d066a0c795d9818e3c830378a119f77fe24be6a7c88536e67 WatchSource:0}: Error finding container c23d70d6798f573d066a0c795d9818e3c830378a119f77fe24be6a7c88536e67: Status 404 returned error can't find the container with id c23d70d6798f573d066a0c795d9818e3c830378a119f77fe24be6a7c88536e67 Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.949270 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-bcpmv"] Jan 29 11:43:32 crc kubenswrapper[4766]: I0129 11:43:32.995520 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa222cda-3f2c-49fb-9d14-466dce8c9c40-cert\") pod \"infra-operator-controller-manager-79955696d6-jhhql\" (UID: \"fa222cda-3f2c-49fb-9d14-466dce8c9c40\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jhhql" Jan 29 11:43:32 crc kubenswrapper[4766]: E0129 11:43:32.995706 4766 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 11:43:32 crc kubenswrapper[4766]: E0129 11:43:32.995763 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa222cda-3f2c-49fb-9d14-466dce8c9c40-cert 
podName:fa222cda-3f2c-49fb-9d14-466dce8c9c40 nodeName:}" failed. No retries permitted until 2026-01-29 11:43:34.995745592 +0000 UTC m=+1352.108138603 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fa222cda-3f2c-49fb-9d14-466dce8c9c40-cert") pod "infra-operator-controller-manager-79955696d6-jhhql" (UID: "fa222cda-3f2c-49fb-9d14-466dce8c9c40") : secret "infra-operator-webhook-server-cert" not found Jan 29 11:43:33 crc kubenswrapper[4766]: I0129 11:43:33.098156 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-8xvmt"] Jan 29 11:43:33 crc kubenswrapper[4766]: W0129 11:43:33.101891 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod215d318f_0aac_4fa1_9d80_c162d3922e62.slice/crio-fcf9699a2aae2c8d3b0efe46d8976ee721dae9af78242758cfd06dd406a75749 WatchSource:0}: Error finding container fcf9699a2aae2c8d3b0efe46d8976ee721dae9af78242758cfd06dd406a75749: Status 404 returned error can't find the container with id fcf9699a2aae2c8d3b0efe46d8976ee721dae9af78242758cfd06dd406a75749 Jan 29 11:43:33 crc kubenswrapper[4766]: I0129 11:43:33.152741 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-g5nzh"] Jan 29 11:43:33 crc kubenswrapper[4766]: W0129 11:43:33.156597 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16096f77_0fe2_498f_8b86_480d699b9fd6.slice/crio-94d21dc38735b182122d5192aeb4eb14146c0fb635c8a3b55c452369ee0b9559 WatchSource:0}: Error finding container 94d21dc38735b182122d5192aeb4eb14146c0fb635c8a3b55c452369ee0b9559: Status 404 returned error can't find the container with id 94d21dc38735b182122d5192aeb4eb14146c0fb635c8a3b55c452369ee0b9559 Jan 29 11:43:33 crc kubenswrapper[4766]: I0129 11:43:33.160981 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-nn778"] Jan 29 11:43:33 crc kubenswrapper[4766]: I0129 11:43:33.163065 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-lc5rh"] Jan 29 11:43:33 crc kubenswrapper[4766]: W0129 11:43:33.182597 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38e05981_7669_4ee4_af1a_8ba826587cda.slice/crio-1b4209a5022e6eafaff59d63a381ac94c0c8187bc37007f62176a1184d988c57 WatchSource:0}: Error finding container 1b4209a5022e6eafaff59d63a381ac94c0c8187bc37007f62176a1184d988c57: Status 404 returned error can't find the container with id 1b4209a5022e6eafaff59d63a381ac94c0c8187bc37007f62176a1184d988c57 Jan 29 11:43:33 crc kubenswrapper[4766]: I0129 11:43:33.189099 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-6snl2"] Jan 29 11:43:33 crc kubenswrapper[4766]: E0129 11:43:33.194132 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q87j6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-6snl2_openstack-operators(cacfa742-e2bf-48f3-8da2-3a6f7d66f60e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 11:43:33 crc kubenswrapper[4766]: E0129 11:43:33.196799 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-6snl2" podUID="cacfa742-e2bf-48f3-8da2-3a6f7d66f60e" Jan 29 11:43:33 crc kubenswrapper[4766]: W0129 11:43:33.200603 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeed499c2_b0c1_4fbb_b0e6_543d9f1ac230.slice/crio-0f1736127a9b6b6e63218eb382bbfbaacd9bce9d2ec6f5fcb787d4a8ea58ba58 WatchSource:0}: Error finding container 0f1736127a9b6b6e63218eb382bbfbaacd9bce9d2ec6f5fcb787d4a8ea58ba58: Status 404 returned error can't find the container with id 0f1736127a9b6b6e63218eb382bbfbaacd9bce9d2ec6f5fcb787d4a8ea58ba58 Jan 29 11:43:33 crc kubenswrapper[4766]: I0129 11:43:33.205524 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-gth8s"] Jan 29 11:43:33 crc kubenswrapper[4766]: E0129 11:43:33.209246 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8nt7c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-gth8s_openstack-operators(eed499c2-b0c1-4fbb-b0e6-543d9f1ac230): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 11:43:33 crc kubenswrapper[4766]: E0129 11:43:33.210650 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gth8s" podUID="eed499c2-b0c1-4fbb-b0e6-543d9f1ac230" Jan 29 11:43:33 crc kubenswrapper[4766]: E0129 11:43:33.233343 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gth8s" podUID="eed499c2-b0c1-4fbb-b0e6-543d9f1ac230" Jan 29 11:43:33 crc kubenswrapper[4766]: I0129 11:43:33.242478 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-bcpmv" 
event={"ID":"dbcef236-480a-41a2-8462-4695dc762ed1","Type":"ContainerStarted","Data":"93498b4f23f16432ea5223e9d72586adceb41d2ed628789e947d68d9c562aef2"} Jan 29 11:43:33 crc kubenswrapper[4766]: I0129 11:43:33.242523 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gth8s" event={"ID":"eed499c2-b0c1-4fbb-b0e6-543d9f1ac230","Type":"ContainerStarted","Data":"0f1736127a9b6b6e63218eb382bbfbaacd9bce9d2ec6f5fcb787d4a8ea58ba58"} Jan 29 11:43:33 crc kubenswrapper[4766]: I0129 11:43:33.242549 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-lc5rh" event={"ID":"16096f77-0fe2-498f-8b86-480d699b9fd6","Type":"ContainerStarted","Data":"94d21dc38735b182122d5192aeb4eb14146c0fb635c8a3b55c452369ee0b9559"} Jan 29 11:43:33 crc kubenswrapper[4766]: I0129 11:43:33.245916 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-nn778" event={"ID":"38e05981-7669-4ee4-af1a-8ba826587cda","Type":"ContainerStarted","Data":"1b4209a5022e6eafaff59d63a381ac94c0c8187bc37007f62176a1184d988c57"} Jan 29 11:43:33 crc kubenswrapper[4766]: I0129 11:43:33.284123 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-rgj67" event={"ID":"ab5afb01-eba4-4480-a437-4d2e0cdb16bb","Type":"ContainerStarted","Data":"184021fc258d50c3486d9f44c4fae6b20debb4d8836e781843176d435a1c6dc8"} Jan 29 11:43:33 crc kubenswrapper[4766]: I0129 11:43:33.290741 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-gpc9d" event={"ID":"d93f68fe-b726-4e2d-afa4-9b789a96dc55","Type":"ContainerStarted","Data":"8494fb352b3d1fc14e42c2525e956beaa307e228bf01036f49eb2107da5c71b9"} Jan 29 11:43:33 crc kubenswrapper[4766]: I0129 11:43:33.292989 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-zqq8r" event={"ID":"7d553af8-9c25-432a-bb68-5402fbd6221e","Type":"ContainerStarted","Data":"f9a6604c17930e94808878ea9df11fae3fb4ca78ec06a8d4623efe24d2ab6074"} Jan 29 11:43:33 crc kubenswrapper[4766]: I0129 11:43:33.294234 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-w7n2r"] Jan 29 11:43:33 crc kubenswrapper[4766]: I0129 11:43:33.296357 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-g5nzh" event={"ID":"a3ac13ec-bcf6-40f8-be96-d4302334f324","Type":"ContainerStarted","Data":"f9c3014d30ef17828b206a3c2d1396939ce76383844552bd22726e16dd3e3dbf"} Jan 29 11:43:33 crc kubenswrapper[4766]: I0129 11:43:33.299689 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zw2qx" event={"ID":"767a94c6-6767-4dc9-9054-70945f39e248","Type":"ContainerStarted","Data":"548acff921fe22dfa7043e4b5be26f5fa46cc054ad5394de5256e59d98024c32"} Jan 29 11:43:33 crc kubenswrapper[4766]: I0129 11:43:33.301776 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2xvf8"] Jan 29 11:43:33 crc kubenswrapper[4766]: I0129 11:43:33.304683 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gwttp" 
event={"ID":"8e6fc747-e7e2-438d-a00e-3ab94b806035","Type":"ContainerStarted","Data":"4160f468b7a9c1b7b92da92aafec3f0bef784184eaa4951aa84c75fd86af10d0"} Jan 29 11:43:33 crc kubenswrapper[4766]: I0129 11:43:33.305241 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-zgjhb"] Jan 29 11:43:33 crc kubenswrapper[4766]: I0129 11:43:33.307568 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-6snl2" event={"ID":"cacfa742-e2bf-48f3-8da2-3a6f7d66f60e","Type":"ContainerStarted","Data":"b9cde51684f90308a1cd22c01c6cb36c84991d161b4839d3a2785958184d4d80"} Jan 29 11:43:33 crc kubenswrapper[4766]: E0129 11:43:33.309600 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-6snl2" podUID="cacfa742-e2bf-48f3-8da2-3a6f7d66f60e" Jan 29 11:43:33 crc kubenswrapper[4766]: I0129 11:43:33.346983 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-c5gtf" event={"ID":"c2f7549b-08ae-4ec0-96ac-25997e35d30e","Type":"ContainerStarted","Data":"5dd9bd6b4ecd0070b74633b0b5eab5d460c27a360d25e76133372cf382597b13"} Jan 29 11:43:33 crc kubenswrapper[4766]: I0129 11:43:33.352192 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kbqkb" event={"ID":"a9f5e2bf-dd4c-405f-9a1c-439c3abea9f6","Type":"ContainerStarted","Data":"159655706cf1dcc7106242226037a22a52fadd14470135a472948c105ee7ec27"} Jan 29 11:43:33 crc kubenswrapper[4766]: E0129 11:43:33.357441 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g9sd2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-zgjhb_openstack-operators(248cfd14-922e-4123-9a39-849d292613f0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 11:43:33 crc kubenswrapper[4766]: E0129 11:43:33.357820 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q2m47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-w7n2r_openstack-operators(1d209024-1c33-4b4e-af1c-71c6039e69c9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 11:43:33 crc 
kubenswrapper[4766]: E0129 11:43:33.371348 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-zgjhb" podUID="248cfd14-922e-4123-9a39-849d292613f0" Jan 29 11:43:33 crc kubenswrapper[4766]: E0129 11:43:33.371531 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-w7n2r" podUID="1d209024-1c33-4b4e-af1c-71c6039e69c9" Jan 29 11:43:33 crc kubenswrapper[4766]: I0129 11:43:33.376826 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cb57l" event={"ID":"c98f5447-fb23-4b08-b5a8-70bce28d9bb7","Type":"ContainerStarted","Data":"c23d70d6798f573d066a0c795d9818e3c830378a119f77fe24be6a7c88536e67"} Jan 29 11:43:33 crc kubenswrapper[4766]: I0129 11:43:33.390078 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-8xvmt" event={"ID":"215d318f-0aac-4fa1-9d80-c162d3922e62","Type":"ContainerStarted","Data":"fcf9699a2aae2c8d3b0efe46d8976ee721dae9af78242758cfd06dd406a75749"} Jan 29 11:43:33 crc kubenswrapper[4766]: I0129 11:43:33.401327 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b1a55ff-1c16-4f2d-a92a-f00adeff5423-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2\" (UID: \"8b1a55ff-1c16-4f2d-a92a-f00adeff5423\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2" Jan 29 11:43:33 crc kubenswrapper[4766]: E0129 11:43:33.401608 4766 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:43:33 crc kubenswrapper[4766]: E0129 11:43:33.401657 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b1a55ff-1c16-4f2d-a92a-f00adeff5423-cert podName:8b1a55ff-1c16-4f2d-a92a-f00adeff5423 nodeName:}" failed. No retries permitted until 2026-01-29 11:43:35.401643958 +0000 UTC m=+1352.514036969 (durationBeforeRetry 2s). 
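
The ErrImagePull "pull QPS exceeded" errors above are not registry failures. The kubelet rate-limits image pulls with a token bucket (the registryPullQPS and registryBurst kubelet settings; the upstream defaults are 5 QPS with a burst of 10), and starting this many operator deployments at once drains the bucket, so the overflow pulls fail immediately and the pods drop into ImagePullBackOff until retried. A rough sketch of that limiter behavior using golang.org/x/time/rate, under those assumed default values:

```go
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// Token bucket: refills 5 tokens/s, holds at most 10 (kubelet defaults).
	limiter := rate.NewLimiter(rate.Limit(5), 10)
	for i := 1; i <= 15; i++ {
		// In a tight loop the first ten pulls succeed on the burst allowance;
		// the rest are throttled, the same failure mode logged above.
		if limiter.Allow() {
			fmt.Printf("pull %2d: allowed\n", i)
		} else {
			fmt.Printf("pull %2d: pull QPS exceeded\n", i)
		}
	}
}
```
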
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8b1a55ff-1c16-4f2d-a92a-f00adeff5423-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2" (UID: "8b1a55ff-1c16-4f2d-a92a-f00adeff5423") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:43:33 crc kubenswrapper[4766]: I0129 11:43:33.705548 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-webhook-certs\") pod \"openstack-operator-controller-manager-66cc5c7d8c-wrglp\" (UID: \"bd3e0529-a99d-4174-a9b1-7a937bf09579\") " pod="openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp" Jan 29 11:43:33 crc kubenswrapper[4766]: I0129 11:43:33.705952 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-metrics-certs\") pod \"openstack-operator-controller-manager-66cc5c7d8c-wrglp\" (UID: \"bd3e0529-a99d-4174-a9b1-7a937bf09579\") " pod="openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp" Jan 29 11:43:33 crc kubenswrapper[4766]: E0129 11:43:33.705868 4766 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 11:43:33 crc kubenswrapper[4766]: E0129 11:43:33.706164 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-webhook-certs podName:bd3e0529-a99d-4174-a9b1-7a937bf09579 nodeName:}" failed. No retries permitted until 2026-01-29 11:43:35.706144169 +0000 UTC m=+1352.818537180 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-webhook-certs") pod "openstack-operator-controller-manager-66cc5c7d8c-wrglp" (UID: "bd3e0529-a99d-4174-a9b1-7a937bf09579") : secret "webhook-server-cert" not found Jan 29 11:43:33 crc kubenswrapper[4766]: E0129 11:43:33.706102 4766 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 11:43:33 crc kubenswrapper[4766]: E0129 11:43:33.706203 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-metrics-certs podName:bd3e0529-a99d-4174-a9b1-7a937bf09579 nodeName:}" failed. No retries permitted until 2026-01-29 11:43:35.70619578 +0000 UTC m=+1352.818588791 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-metrics-certs") pod "openstack-operator-controller-manager-66cc5c7d8c-wrglp" (UID: "bd3e0529-a99d-4174-a9b1-7a937bf09579") : secret "metrics-server-cert" not found Jan 29 11:43:34 crc kubenswrapper[4766]: I0129 11:43:34.410268 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-w7n2r" event={"ID":"1d209024-1c33-4b4e-af1c-71c6039e69c9","Type":"ContainerStarted","Data":"cd6dbaef210e3130c701be742497a2445cc95087a26283d8b2cd8b3aab89386c"} Jan 29 11:43:34 crc kubenswrapper[4766]: I0129 11:43:34.412647 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2xvf8" event={"ID":"df93cd48-3695-40d8-a9e5-7321f57034ed","Type":"ContainerStarted","Data":"e57fa90ac6a811d6b847014f79a9c8ec59923941dad0f1bd855cddaeb567364f"} Jan 29 11:43:34 crc kubenswrapper[4766]: E0129 11:43:34.412915 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-w7n2r" podUID="1d209024-1c33-4b4e-af1c-71c6039e69c9" Jan 29 11:43:34 crc kubenswrapper[4766]: I0129 11:43:34.415303 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-zgjhb" event={"ID":"248cfd14-922e-4123-9a39-849d292613f0","Type":"ContainerStarted","Data":"a5854594c653c5d9410f2c1c8ab8df40f9adad48133f14a398e194ee149c224a"} Jan 29 11:43:34 crc kubenswrapper[4766]: E0129 11:43:34.417520 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gth8s" podUID="eed499c2-b0c1-4fbb-b0e6-543d9f1ac230" Jan 29 11:43:34 crc kubenswrapper[4766]: E0129 11:43:34.417618 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-6snl2" podUID="cacfa742-e2bf-48f3-8da2-3a6f7d66f60e" Jan 29 11:43:34 crc kubenswrapper[4766]: E0129 11:43:34.421404 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-zgjhb" podUID="248cfd14-922e-4123-9a39-849d292613f0" Jan 29 11:43:35 crc kubenswrapper[4766]: I0129 11:43:35.026161 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa222cda-3f2c-49fb-9d14-466dce8c9c40-cert\") pod \"infra-operator-controller-manager-79955696d6-jhhql\" (UID: 
\"fa222cda-3f2c-49fb-9d14-466dce8c9c40\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jhhql" Jan 29 11:43:35 crc kubenswrapper[4766]: E0129 11:43:35.026309 4766 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 11:43:35 crc kubenswrapper[4766]: E0129 11:43:35.026393 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa222cda-3f2c-49fb-9d14-466dce8c9c40-cert podName:fa222cda-3f2c-49fb-9d14-466dce8c9c40 nodeName:}" failed. No retries permitted until 2026-01-29 11:43:39.026375331 +0000 UTC m=+1356.138768352 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fa222cda-3f2c-49fb-9d14-466dce8c9c40-cert") pod "infra-operator-controller-manager-79955696d6-jhhql" (UID: "fa222cda-3f2c-49fb-9d14-466dce8c9c40") : secret "infra-operator-webhook-server-cert" not found Jan 29 11:43:35 crc kubenswrapper[4766]: I0129 11:43:35.435919 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b1a55ff-1c16-4f2d-a92a-f00adeff5423-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2\" (UID: \"8b1a55ff-1c16-4f2d-a92a-f00adeff5423\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2" Jan 29 11:43:35 crc kubenswrapper[4766]: E0129 11:43:35.436672 4766 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:43:35 crc kubenswrapper[4766]: E0129 11:43:35.436720 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b1a55ff-1c16-4f2d-a92a-f00adeff5423-cert podName:8b1a55ff-1c16-4f2d-a92a-f00adeff5423 nodeName:}" failed. No retries permitted until 2026-01-29 11:43:39.436704638 +0000 UTC m=+1356.549097649 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8b1a55ff-1c16-4f2d-a92a-f00adeff5423-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2" (UID: "8b1a55ff-1c16-4f2d-a92a-f00adeff5423") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:43:35 crc kubenswrapper[4766]: E0129 11:43:35.465479 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-zgjhb" podUID="248cfd14-922e-4123-9a39-849d292613f0" Jan 29 11:43:35 crc kubenswrapper[4766]: E0129 11:43:35.466003 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-w7n2r" podUID="1d209024-1c33-4b4e-af1c-71c6039e69c9" Jan 29 11:43:35 crc kubenswrapper[4766]: I0129 11:43:35.750623 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-webhook-certs\") pod \"openstack-operator-controller-manager-66cc5c7d8c-wrglp\" (UID: \"bd3e0529-a99d-4174-a9b1-7a937bf09579\") " pod="openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp" Jan 29 11:43:35 crc kubenswrapper[4766]: I0129 11:43:35.750731 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-metrics-certs\") pod \"openstack-operator-controller-manager-66cc5c7d8c-wrglp\" (UID: \"bd3e0529-a99d-4174-a9b1-7a937bf09579\") " pod="openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp" Jan 29 11:43:35 crc kubenswrapper[4766]: E0129 11:43:35.750969 4766 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 11:43:35 crc kubenswrapper[4766]: E0129 11:43:35.750969 4766 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 11:43:35 crc kubenswrapper[4766]: E0129 11:43:35.751032 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-webhook-certs podName:bd3e0529-a99d-4174-a9b1-7a937bf09579 nodeName:}" failed. No retries permitted until 2026-01-29 11:43:39.751018351 +0000 UTC m=+1356.863411362 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-webhook-certs") pod "openstack-operator-controller-manager-66cc5c7d8c-wrglp" (UID: "bd3e0529-a99d-4174-a9b1-7a937bf09579") : secret "webhook-server-cert" not found Jan 29 11:43:35 crc kubenswrapper[4766]: E0129 11:43:35.751047 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-metrics-certs podName:bd3e0529-a99d-4174-a9b1-7a937bf09579 nodeName:}" failed. 
No retries permitted until 2026-01-29 11:43:39.751041642 +0000 UTC m=+1356.863434653 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-metrics-certs") pod "openstack-operator-controller-manager-66cc5c7d8c-wrglp" (UID: "bd3e0529-a99d-4174-a9b1-7a937bf09579") : secret "metrics-server-cert" not found Jan 29 11:43:39 crc kubenswrapper[4766]: I0129 11:43:39.100963 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa222cda-3f2c-49fb-9d14-466dce8c9c40-cert\") pod \"infra-operator-controller-manager-79955696d6-jhhql\" (UID: \"fa222cda-3f2c-49fb-9d14-466dce8c9c40\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jhhql" Jan 29 11:43:39 crc kubenswrapper[4766]: E0129 11:43:39.101155 4766 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 11:43:39 crc kubenswrapper[4766]: E0129 11:43:39.101480 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa222cda-3f2c-49fb-9d14-466dce8c9c40-cert podName:fa222cda-3f2c-49fb-9d14-466dce8c9c40 nodeName:}" failed. No retries permitted until 2026-01-29 11:43:47.101456701 +0000 UTC m=+1364.213849712 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fa222cda-3f2c-49fb-9d14-466dce8c9c40-cert") pod "infra-operator-controller-manager-79955696d6-jhhql" (UID: "fa222cda-3f2c-49fb-9d14-466dce8c9c40") : secret "infra-operator-webhook-server-cert" not found Jan 29 11:43:39 crc kubenswrapper[4766]: I0129 11:43:39.508175 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b1a55ff-1c16-4f2d-a92a-f00adeff5423-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2\" (UID: \"8b1a55ff-1c16-4f2d-a92a-f00adeff5423\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2" Jan 29 11:43:39 crc kubenswrapper[4766]: E0129 11:43:39.508345 4766 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:43:39 crc kubenswrapper[4766]: E0129 11:43:39.508467 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b1a55ff-1c16-4f2d-a92a-f00adeff5423-cert podName:8b1a55ff-1c16-4f2d-a92a-f00adeff5423 nodeName:}" failed. No retries permitted until 2026-01-29 11:43:47.508446308 +0000 UTC m=+1364.620839329 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8b1a55ff-1c16-4f2d-a92a-f00adeff5423-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2" (UID: "8b1a55ff-1c16-4f2d-a92a-f00adeff5423") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:43:39 crc kubenswrapper[4766]: I0129 11:43:39.813738 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-webhook-certs\") pod \"openstack-operator-controller-manager-66cc5c7d8c-wrglp\" (UID: \"bd3e0529-a99d-4174-a9b1-7a937bf09579\") " pod="openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp" Jan 29 11:43:39 crc kubenswrapper[4766]: I0129 11:43:39.813821 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-metrics-certs\") pod \"openstack-operator-controller-manager-66cc5c7d8c-wrglp\" (UID: \"bd3e0529-a99d-4174-a9b1-7a937bf09579\") " pod="openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp" Jan 29 11:43:39 crc kubenswrapper[4766]: E0129 11:43:39.813921 4766 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 11:43:39 crc kubenswrapper[4766]: E0129 11:43:39.813945 4766 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 11:43:39 crc kubenswrapper[4766]: E0129 11:43:39.813988 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-webhook-certs podName:bd3e0529-a99d-4174-a9b1-7a937bf09579 nodeName:}" failed. No retries permitted until 2026-01-29 11:43:47.813967471 +0000 UTC m=+1364.926360492 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-webhook-certs") pod "openstack-operator-controller-manager-66cc5c7d8c-wrglp" (UID: "bd3e0529-a99d-4174-a9b1-7a937bf09579") : secret "webhook-server-cert" not found Jan 29 11:43:39 crc kubenswrapper[4766]: E0129 11:43:39.814007 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-metrics-certs podName:bd3e0529-a99d-4174-a9b1-7a937bf09579 nodeName:}" failed. No retries permitted until 2026-01-29 11:43:47.813998932 +0000 UTC m=+1364.926391943 (durationBeforeRetry 8s). 
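
Two distinct image-pull states alternate through this section: ErrImagePull is the immediate failure of a pull attempt (so far QPS throttling, and a little further down a canceled CRI pull RPC), while ImagePullBackOff is the waiting state between attempts, with its own doubling delay. A small sketch of that progression; the 10s initial delay and 300s cap are commonly cited kubelet defaults and are an assumption here, not something this log confirms:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const maxBackoff = 300 * time.Second // assumed cap (kubelet's max container backoff)
	delay := 10 * time.Second            // assumed initial image-pull backoff
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: ErrImagePull -> ImagePullBackOff for %v\n", attempt, delay)
		delay *= 2
		if delay > maxBackoff {
			delay = maxBackoff
		}
	}
}
```
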
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-metrics-certs") pod "openstack-operator-controller-manager-66cc5c7d8c-wrglp" (UID: "bd3e0529-a99d-4174-a9b1-7a937bf09579") : secret "metrics-server-cert" not found Jan 29 11:43:45 crc kubenswrapper[4766]: I0129 11:43:45.229999 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 11:43:45 crc kubenswrapper[4766]: E0129 11:43:45.685546 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10" Jan 29 11:43:45 crc kubenswrapper[4766]: E0129 11:43:45.686094 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r72jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69d6db494d-zw2qx_openstack-operators(767a94c6-6767-4dc9-9054-70945f39e248): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:43:45 crc kubenswrapper[4766]: E0129 11:43:45.688058 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: 
\"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zw2qx" podUID="767a94c6-6767-4dc9-9054-70945f39e248" Jan 29 11:43:46 crc kubenswrapper[4766]: E0129 11:43:46.543490 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zw2qx" podUID="767a94c6-6767-4dc9-9054-70945f39e248" Jan 29 11:43:47 crc kubenswrapper[4766]: E0129 11:43:47.016279 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 29 11:43:47 crc kubenswrapper[4766]: E0129 11:43:47.016726 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ktvrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-2xvf8_openstack-operators(df93cd48-3695-40d8-a9e5-7321f57034ed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:43:47 crc kubenswrapper[4766]: E0129 11:43:47.017869 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2xvf8" 
podUID="df93cd48-3695-40d8-a9e5-7321f57034ed" Jan 29 11:43:47 crc kubenswrapper[4766]: I0129 11:43:47.133283 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa222cda-3f2c-49fb-9d14-466dce8c9c40-cert\") pod \"infra-operator-controller-manager-79955696d6-jhhql\" (UID: \"fa222cda-3f2c-49fb-9d14-466dce8c9c40\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jhhql" Jan 29 11:43:47 crc kubenswrapper[4766]: E0129 11:43:47.133454 4766 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 11:43:47 crc kubenswrapper[4766]: E0129 11:43:47.133526 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa222cda-3f2c-49fb-9d14-466dce8c9c40-cert podName:fa222cda-3f2c-49fb-9d14-466dce8c9c40 nodeName:}" failed. No retries permitted until 2026-01-29 11:44:03.133506948 +0000 UTC m=+1380.245899949 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fa222cda-3f2c-49fb-9d14-466dce8c9c40-cert") pod "infra-operator-controller-manager-79955696d6-jhhql" (UID: "fa222cda-3f2c-49fb-9d14-466dce8c9c40") : secret "infra-operator-webhook-server-cert" not found Jan 29 11:43:47 crc kubenswrapper[4766]: I0129 11:43:47.545240 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b1a55ff-1c16-4f2d-a92a-f00adeff5423-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2\" (UID: \"8b1a55ff-1c16-4f2d-a92a-f00adeff5423\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2" Jan 29 11:43:47 crc kubenswrapper[4766]: E0129 11:43:47.545436 4766 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:43:47 crc kubenswrapper[4766]: E0129 11:43:47.545498 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b1a55ff-1c16-4f2d-a92a-f00adeff5423-cert podName:8b1a55ff-1c16-4f2d-a92a-f00adeff5423 nodeName:}" failed. No retries permitted until 2026-01-29 11:44:03.545480124 +0000 UTC m=+1380.657873135 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8b1a55ff-1c16-4f2d-a92a-f00adeff5423-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2" (UID: "8b1a55ff-1c16-4f2d-a92a-f00adeff5423") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 29 11:43:47 crc kubenswrapper[4766]: I0129 11:43:47.571445 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-8xvmt" event={"ID":"215d318f-0aac-4fa1-9d80-c162d3922e62","Type":"ContainerStarted","Data":"bf9723eaf39d838395aa35c01598f8bea20b4b17c74f2da893dc7f9843ba07b4"}
Jan 29 11:43:47 crc kubenswrapper[4766]: I0129 11:43:47.572330 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-8xvmt"
Jan 29 11:43:47 crc kubenswrapper[4766]: I0129 11:43:47.575993 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-nn778" event={"ID":"38e05981-7669-4ee4-af1a-8ba826587cda","Type":"ContainerStarted","Data":"09722e3bb1c2463d39fddf7cf2a910e5477e110fdb39ed75fd411e24f7392195"}
Jan 29 11:43:47 crc kubenswrapper[4766]: I0129 11:43:47.576261 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-nn778"
Jan 29 11:43:47 crc kubenswrapper[4766]: I0129 11:43:47.578016 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-zqq8r" event={"ID":"7d553af8-9c25-432a-bb68-5402fbd6221e","Type":"ContainerStarted","Data":"de754192100c155b133350ca041d5fe308f599ce87d5f3c0220ec5c1b9b41d86"}
Jan 29 11:43:47 crc kubenswrapper[4766]: I0129 11:43:47.578743 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-zqq8r"
Jan 29 11:43:47 crc kubenswrapper[4766]: I0129 11:43:47.582965 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-rgj67" event={"ID":"ab5afb01-eba4-4480-a437-4d2e0cdb16bb","Type":"ContainerStarted","Data":"b04e9da7a3dda85757bc21b24916bd97aee7dbaf05c41d82e67ec455e405f671"}
Jan 29 11:43:47 crc kubenswrapper[4766]: I0129 11:43:47.583524 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-rgj67"
Jan 29 11:43:47 crc kubenswrapper[4766]: I0129 11:43:47.590691 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-gpc9d" event={"ID":"d93f68fe-b726-4e2d-afa4-9b789a96dc55","Type":"ContainerStarted","Data":"c2ee76689eedbc32fcb8e2fbd2bc3fb919eb039323bffd884262f6e3f5332a9d"}
Jan 29 11:43:47 crc kubenswrapper[4766]: I0129 11:43:47.591266 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-gpc9d"
Jan 29 11:43:47 crc kubenswrapper[4766]: I0129 11:43:47.592672 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-8xvmt" podStartSLOduration=2.640101365 podStartE2EDuration="16.592660082s" podCreationTimestamp="2026-01-29 11:43:31 +0000 UTC" firstStartedPulling="2026-01-29 11:43:33.103960866 +0000 UTC m=+1350.216353877" lastFinishedPulling="2026-01-29 11:43:47.056519583 +0000 UTC m=+1364.168912594" observedRunningTime="2026-01-29 11:43:47.587495249 +0000 UTC m=+1364.699888260" watchObservedRunningTime="2026-01-29 11:43:47.592660082 +0000 UTC m=+1364.705053093"
Jan 29 11:43:47 crc kubenswrapper[4766]: I0129 11:43:47.616774 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-bcpmv" event={"ID":"dbcef236-480a-41a2-8462-4695dc762ed1","Type":"ContainerStarted","Data":"4a783647b72bbff6bbcb159bff63692ced6d1f55c341af025df107f4be28adad"}
Jan 29 11:43:47 crc kubenswrapper[4766]: I0129 11:43:47.617754 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-bcpmv"
Jan 29 11:43:47 crc kubenswrapper[4766]: I0129 11:43:47.623199 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hn2zr"
Jan 29 11:43:47 crc kubenswrapper[4766]: E0129 11:43:47.626463 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2xvf8" podUID="df93cd48-3695-40d8-a9e5-7321f57034ed"
Jan 29 11:43:47 crc kubenswrapper[4766]: I0129 11:43:47.647958 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-rgj67" podStartSLOduration=1.9573344449999999 podStartE2EDuration="16.647940096s" podCreationTimestamp="2026-01-29 11:43:31 +0000 UTC" firstStartedPulling="2026-01-29 11:43:32.353300482 +0000 UTC m=+1349.465693493" lastFinishedPulling="2026-01-29 11:43:47.043906133 +0000 UTC m=+1364.156299144" observedRunningTime="2026-01-29 11:43:47.647844163 +0000 UTC m=+1364.760237194" watchObservedRunningTime="2026-01-29 11:43:47.647940096 +0000 UTC m=+1364.760333107"
Jan 29 11:43:47 crc kubenswrapper[4766]: I0129 11:43:47.685053 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-zqq8r" podStartSLOduration=2.056892776 podStartE2EDuration="16.685031364s" podCreationTimestamp="2026-01-29 11:43:31 +0000 UTC" firstStartedPulling="2026-01-29 11:43:32.410752676 +0000 UTC m=+1349.523145677" lastFinishedPulling="2026-01-29 11:43:47.038891264 +0000 UTC m=+1364.151284265" observedRunningTime="2026-01-29 11:43:47.681831915 +0000 UTC m=+1364.794224926" watchObservedRunningTime="2026-01-29 11:43:47.685031364 +0000 UTC m=+1364.797424375"
Jan 29 11:43:47 crc kubenswrapper[4766]: I0129 11:43:47.716723 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-nn778" podStartSLOduration=2.8495394750000003 podStartE2EDuration="16.716703333s" podCreationTimestamp="2026-01-29 11:43:31 +0000 UTC" firstStartedPulling="2026-01-29 11:43:33.185660003 +0000 UTC m=+1350.298053014" lastFinishedPulling="2026-01-29 11:43:47.052823851 +0000 UTC m=+1364.165216872" observedRunningTime="2026-01-29 11:43:47.713444222 +0000 UTC m=+1364.825837253" watchObservedRunningTime="2026-01-29 11:43:47.716703333 +0000 UTC m=+1364.829096344"
Jan 29 11:43:47 crc kubenswrapper[4766]: I0129 11:43:47.744804 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hn2zr" podStartSLOduration=1.719899023 podStartE2EDuration="16.744787251s" podCreationTimestamp="2026-01-29 11:43:31 +0000 UTC" firstStartedPulling="2026-01-29 11:43:32.039786621 +0000 UTC m=+1349.152179632" lastFinishedPulling="2026-01-29 11:43:47.064674859 +0000 UTC m=+1364.177067860" observedRunningTime="2026-01-29 11:43:47.742130898 +0000 UTC m=+1364.854523929" watchObservedRunningTime="2026-01-29 11:43:47.744787251 +0000 UTC m=+1364.857180272"
Jan 29 11:43:47 crc kubenswrapper[4766]: I0129 11:43:47.764743 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-bcpmv" podStartSLOduration=2.663714187 podStartE2EDuration="16.764726324s" podCreationTimestamp="2026-01-29 11:43:31 +0000 UTC" firstStartedPulling="2026-01-29 11:43:32.957685176 +0000 UTC m=+1350.070078177" lastFinishedPulling="2026-01-29 11:43:47.058697303 +0000 UTC m=+1364.171090314" observedRunningTime="2026-01-29 11:43:47.76385023 +0000 UTC m=+1364.876243231" watchObservedRunningTime="2026-01-29 11:43:47.764726324 +0000 UTC m=+1364.877119335"
Jan 29 11:43:47 crc kubenswrapper[4766]: I0129 11:43:47.795880 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-gpc9d" podStartSLOduration=2.485086639 podStartE2EDuration="16.795860758s" podCreationTimestamp="2026-01-29 11:43:31 +0000 UTC" firstStartedPulling="2026-01-29 11:43:32.737612548 +0000 UTC m=+1349.850005559" lastFinishedPulling="2026-01-29 11:43:47.048386667 +0000 UTC m=+1364.160779678" observedRunningTime="2026-01-29 11:43:47.786461007 +0000 UTC m=+1364.898854018" watchObservedRunningTime="2026-01-29 11:43:47.795860758 +0000 UTC m=+1364.908253769"
Jan 29 11:43:47 crc kubenswrapper[4766]: I0129 11:43:47.850667 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-metrics-certs\") pod \"openstack-operator-controller-manager-66cc5c7d8c-wrglp\" (UID: \"bd3e0529-a99d-4174-a9b1-7a937bf09579\") " pod="openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp"
Jan 29 11:43:47 crc kubenswrapper[4766]: I0129 11:43:47.851038 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-webhook-certs\") pod \"openstack-operator-controller-manager-66cc5c7d8c-wrglp\" (UID: \"bd3e0529-a99d-4174-a9b1-7a937bf09579\") " pod="openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp"
Jan 29 11:43:47 crc kubenswrapper[4766]: E0129 11:43:47.851173 4766 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 29 11:43:47 crc kubenswrapper[4766]: E0129 11:43:47.851219 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-webhook-certs podName:bd3e0529-a99d-4174-a9b1-7a937bf09579 nodeName:}" failed. No retries permitted until 2026-01-29 11:44:03.851205603 +0000 UTC m=+1380.963598614 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-webhook-certs") pod "openstack-operator-controller-manager-66cc5c7d8c-wrglp" (UID: "bd3e0529-a99d-4174-a9b1-7a937bf09579") : secret "webhook-server-cert" not found
Jan 29 11:43:47 crc kubenswrapper[4766]: E0129 11:43:47.851515 4766 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 29 11:43:47 crc kubenswrapper[4766]: E0129 11:43:47.851555 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-metrics-certs podName:bd3e0529-a99d-4174-a9b1-7a937bf09579 nodeName:}" failed. No retries permitted until 2026-01-29 11:44:03.851542682 +0000 UTC m=+1380.963935693 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-metrics-certs") pod "openstack-operator-controller-manager-66cc5c7d8c-wrglp" (UID: "bd3e0529-a99d-4174-a9b1-7a937bf09579") : secret "metrics-server-cert" not found
Jan 29 11:43:48 crc kubenswrapper[4766]: I0129 11:43:48.652067 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gwttp" event={"ID":"8e6fc747-e7e2-438d-a00e-3ab94b806035","Type":"ContainerStarted","Data":"7aab4e58b81872f929763398454498ef97536d46782008eb8f331f1bcf97eef2"}
Jan 29 11:43:48 crc kubenswrapper[4766]: I0129 11:43:48.652172 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gwttp"
Jan 29 11:43:48 crc kubenswrapper[4766]: I0129 11:43:48.662282 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-g5nzh" event={"ID":"a3ac13ec-bcf6-40f8-be96-d4302334f324","Type":"ContainerStarted","Data":"ba2d5f42fdb4213872a7482fd09550bd3a9eb89a66319b27e8d8421a67528783"}
Jan 29 11:43:48 crc kubenswrapper[4766]: I0129 11:43:48.662403 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-g5nzh"
Jan 29 11:43:48 crc kubenswrapper[4766]: I0129 11:43:48.668697 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4j7sq" event={"ID":"19efe92b-6dae-4b62-920b-0348877b5217","Type":"ContainerStarted","Data":"1afbc2305bfa8df0b42496c42852ca9777b0f355661fabcbdefc6d669d6ef1b7"}
Jan 29 11:43:48 crc kubenswrapper[4766]: I0129 11:43:48.669458 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4j7sq"
Jan 29 11:43:48 crc kubenswrapper[4766]: I0129 11:43:48.675680 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-c5gtf" event={"ID":"c2f7549b-08ae-4ec0-96ac-25997e35d30e","Type":"ContainerStarted","Data":"13d22f350d617e1ed5277b156c4da44e953cdb8ab521157f7276fc1cc4b91907"}
Jan 29 11:43:48 crc kubenswrapper[4766]: I0129 11:43:48.675843 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-c5gtf"
Jan 29 11:43:48 crc kubenswrapper[4766]: I0129 11:43:48.677786 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hn2zr" event={"ID":"0f72b13e-c9ce-4ada-ace2-432d17b8784e","Type":"ContainerStarted","Data":"45bd75622c3fd695b0a7360875d092aef7fe10dd81153eca5a5fd82b1b492ba3"}
pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hn2zr" event={"ID":"0f72b13e-c9ce-4ada-ace2-432d17b8784e","Type":"ContainerStarted","Data":"45bd75622c3fd695b0a7360875d092aef7fe10dd81153eca5a5fd82b1b492ba3"} Jan 29 11:43:48 crc kubenswrapper[4766]: I0129 11:43:48.682140 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-lc5rh" event={"ID":"16096f77-0fe2-498f-8b86-480d699b9fd6","Type":"ContainerStarted","Data":"21ab05d325cef156c32135d77f0dddb7059bd3d6856d9e6c8f9799f471583580"} Jan 29 11:43:48 crc kubenswrapper[4766]: I0129 11:43:48.682304 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-lc5rh" Jan 29 11:43:48 crc kubenswrapper[4766]: I0129 11:43:48.684814 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gwttp" podStartSLOduration=3.329849316 podStartE2EDuration="17.684796622s" podCreationTimestamp="2026-01-29 11:43:31 +0000 UTC" firstStartedPulling="2026-01-29 11:43:32.688574237 +0000 UTC m=+1349.800967238" lastFinishedPulling="2026-01-29 11:43:47.043521533 +0000 UTC m=+1364.155914544" observedRunningTime="2026-01-29 11:43:48.683750433 +0000 UTC m=+1365.796143444" watchObservedRunningTime="2026-01-29 11:43:48.684796622 +0000 UTC m=+1365.797189633" Jan 29 11:43:48 crc kubenswrapper[4766]: I0129 11:43:48.685021 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kbqkb" event={"ID":"a9f5e2bf-dd4c-405f-9a1c-439c3abea9f6","Type":"ContainerStarted","Data":"b8842465b22e05cc261323e747bae605d6efe448452e4888d411a3a06c6f76db"} Jan 29 11:43:48 crc kubenswrapper[4766]: I0129 11:43:48.685080 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kbqkb" Jan 29 11:43:48 crc kubenswrapper[4766]: I0129 11:43:48.693243 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cb57l" event={"ID":"c98f5447-fb23-4b08-b5a8-70bce28d9bb7","Type":"ContainerStarted","Data":"f1bab74ee6d5955f387f7fefa13dd229b3b36af541830d5020d9e86f8544cb0e"} Jan 29 11:43:48 crc kubenswrapper[4766]: I0129 11:43:48.693295 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cb57l" Jan 29 11:43:48 crc kubenswrapper[4766]: I0129 11:43:48.720920 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-lc5rh" podStartSLOduration=3.826409368 podStartE2EDuration="17.720899863s" podCreationTimestamp="2026-01-29 11:43:31 +0000 UTC" firstStartedPulling="2026-01-29 11:43:33.16283115 +0000 UTC m=+1350.275224161" lastFinishedPulling="2026-01-29 11:43:47.057321645 +0000 UTC m=+1364.169714656" observedRunningTime="2026-01-29 11:43:48.71502849 +0000 UTC m=+1365.827421521" watchObservedRunningTime="2026-01-29 11:43:48.720899863 +0000 UTC m=+1365.833292874" Jan 29 11:43:48 crc kubenswrapper[4766]: I0129 11:43:48.742638 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-g5nzh" podStartSLOduration=3.804419178 podStartE2EDuration="17.742620596s" podCreationTimestamp="2026-01-29 
11:43:31 +0000 UTC" firstStartedPulling="2026-01-29 11:43:33.164102565 +0000 UTC m=+1350.276495576" lastFinishedPulling="2026-01-29 11:43:47.102303963 +0000 UTC m=+1364.214696994" observedRunningTime="2026-01-29 11:43:48.74206523 +0000 UTC m=+1365.854458241" watchObservedRunningTime="2026-01-29 11:43:48.742620596 +0000 UTC m=+1365.855013607" Jan 29 11:43:48 crc kubenswrapper[4766]: I0129 11:43:48.769858 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-c5gtf" podStartSLOduration=3.639333284 podStartE2EDuration="17.76983821s" podCreationTimestamp="2026-01-29 11:43:31 +0000 UTC" firstStartedPulling="2026-01-29 11:43:32.928021263 +0000 UTC m=+1350.040414274" lastFinishedPulling="2026-01-29 11:43:47.058526189 +0000 UTC m=+1364.170919200" observedRunningTime="2026-01-29 11:43:48.766173679 +0000 UTC m=+1365.878566690" watchObservedRunningTime="2026-01-29 11:43:48.76983821 +0000 UTC m=+1365.882231221" Jan 29 11:43:48 crc kubenswrapper[4766]: I0129 11:43:48.789598 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4j7sq" podStartSLOduration=2.873566581 podStartE2EDuration="17.789579318s" podCreationTimestamp="2026-01-29 11:43:31 +0000 UTC" firstStartedPulling="2026-01-29 11:43:32.126820786 +0000 UTC m=+1349.239213797" lastFinishedPulling="2026-01-29 11:43:47.042833523 +0000 UTC m=+1364.155226534" observedRunningTime="2026-01-29 11:43:48.786742859 +0000 UTC m=+1365.899135880" watchObservedRunningTime="2026-01-29 11:43:48.789579318 +0000 UTC m=+1365.901972329" Jan 29 11:43:48 crc kubenswrapper[4766]: I0129 11:43:48.812945 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cb57l" podStartSLOduration=3.692056596 podStartE2EDuration="17.812927105s" podCreationTimestamp="2026-01-29 11:43:31 +0000 UTC" firstStartedPulling="2026-01-29 11:43:32.937848965 +0000 UTC m=+1350.050241976" lastFinishedPulling="2026-01-29 11:43:47.058719474 +0000 UTC m=+1364.171112485" observedRunningTime="2026-01-29 11:43:48.811572818 +0000 UTC m=+1365.923965859" watchObservedRunningTime="2026-01-29 11:43:48.812927105 +0000 UTC m=+1365.925320116" Jan 29 11:43:48 crc kubenswrapper[4766]: I0129 11:43:48.837242 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kbqkb" podStartSLOduration=3.551721771 podStartE2EDuration="17.837221029s" podCreationTimestamp="2026-01-29 11:43:31 +0000 UTC" firstStartedPulling="2026-01-29 11:43:32.757340226 +0000 UTC m=+1349.869733227" lastFinishedPulling="2026-01-29 11:43:47.042839474 +0000 UTC m=+1364.155232485" observedRunningTime="2026-01-29 11:43:48.835825541 +0000 UTC m=+1365.948218562" watchObservedRunningTime="2026-01-29 11:43:48.837221029 +0000 UTC m=+1365.949614060" Jan 29 11:43:51 crc kubenswrapper[4766]: I0129 11:43:51.727307 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gth8s" event={"ID":"eed499c2-b0c1-4fbb-b0e6-543d9f1ac230","Type":"ContainerStarted","Data":"28049d4e4de352fab0c3b14dc36134b511c527a9ac463f0ef72b78864a556d42"} Jan 29 11:43:51 crc kubenswrapper[4766]: I0129 11:43:51.727851 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gth8s" Jan 
29 11:43:51 crc kubenswrapper[4766]: I0129 11:43:51.729096 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-w7n2r" event={"ID":"1d209024-1c33-4b4e-af1c-71c6039e69c9","Type":"ContainerStarted","Data":"2d8e39796affbac9a6c158a963d2b157f228b88484abaa55a013766bce378330"} Jan 29 11:43:51 crc kubenswrapper[4766]: I0129 11:43:51.729266 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-w7n2r" Jan 29 11:43:51 crc kubenswrapper[4766]: I0129 11:43:51.731196 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-6snl2" event={"ID":"cacfa742-e2bf-48f3-8da2-3a6f7d66f60e","Type":"ContainerStarted","Data":"e73dee090ef1002e1cca213aefcb6d923facc69e28562dcd965da4bfba859010"} Jan 29 11:43:51 crc kubenswrapper[4766]: I0129 11:43:51.731487 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-6snl2" Jan 29 11:43:51 crc kubenswrapper[4766]: I0129 11:43:51.746999 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gth8s" podStartSLOduration=2.8176322110000003 podStartE2EDuration="20.746976977s" podCreationTimestamp="2026-01-29 11:43:31 +0000 UTC" firstStartedPulling="2026-01-29 11:43:33.209096204 +0000 UTC m=+1350.321489215" lastFinishedPulling="2026-01-29 11:43:51.13844097 +0000 UTC m=+1368.250833981" observedRunningTime="2026-01-29 11:43:51.741607548 +0000 UTC m=+1368.854000569" watchObservedRunningTime="2026-01-29 11:43:51.746976977 +0000 UTC m=+1368.859369988" Jan 29 11:43:51 crc kubenswrapper[4766]: I0129 11:43:51.760336 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-6snl2" podStartSLOduration=2.801804711 podStartE2EDuration="20.760317137s" podCreationTimestamp="2026-01-29 11:43:31 +0000 UTC" firstStartedPulling="2026-01-29 11:43:33.193700606 +0000 UTC m=+1350.306093617" lastFinishedPulling="2026-01-29 11:43:51.152213032 +0000 UTC m=+1368.264606043" observedRunningTime="2026-01-29 11:43:51.756896952 +0000 UTC m=+1368.869289973" watchObservedRunningTime="2026-01-29 11:43:51.760317137 +0000 UTC m=+1368.872710158" Jan 29 11:43:51 crc kubenswrapper[4766]: I0129 11:43:51.778709 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-w7n2r" podStartSLOduration=3.004073795 podStartE2EDuration="20.778691737s" podCreationTimestamp="2026-01-29 11:43:31 +0000 UTC" firstStartedPulling="2026-01-29 11:43:33.357338328 +0000 UTC m=+1350.469731339" lastFinishedPulling="2026-01-29 11:43:51.13195627 +0000 UTC m=+1368.244349281" observedRunningTime="2026-01-29 11:43:51.775037756 +0000 UTC m=+1368.887430767" watchObservedRunningTime="2026-01-29 11:43:51.778691737 +0000 UTC m=+1368.891084758" Jan 29 11:43:52 crc kubenswrapper[4766]: I0129 11:43:52.062634 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-8xvmt" Jan 29 11:43:52 crc kubenswrapper[4766]: I0129 11:43:52.740681 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-zgjhb" 
event={"ID":"248cfd14-922e-4123-9a39-849d292613f0","Type":"ContainerStarted","Data":"ab88918124558e73e01439f77b0d2bc8e99d3da5425e2d61f0928250ea92261d"} Jan 29 11:43:52 crc kubenswrapper[4766]: I0129 11:43:52.741355 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-zgjhb" Jan 29 11:43:52 crc kubenswrapper[4766]: I0129 11:43:52.759529 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-zgjhb" podStartSLOduration=2.9236254539999997 podStartE2EDuration="21.759511109s" podCreationTimestamp="2026-01-29 11:43:31 +0000 UTC" firstStartedPulling="2026-01-29 11:43:33.357226805 +0000 UTC m=+1350.469619816" lastFinishedPulling="2026-01-29 11:43:52.19311246 +0000 UTC m=+1369.305505471" observedRunningTime="2026-01-29 11:43:52.754206402 +0000 UTC m=+1369.866599423" watchObservedRunningTime="2026-01-29 11:43:52.759511109 +0000 UTC m=+1369.871904130" Jan 29 11:44:01 crc kubenswrapper[4766]: I0129 11:44:01.395295 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hn2zr" Jan 29 11:44:01 crc kubenswrapper[4766]: I0129 11:44:01.419863 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-zqq8r" Jan 29 11:44:01 crc kubenswrapper[4766]: I0129 11:44:01.432359 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4j7sq" Jan 29 11:44:01 crc kubenswrapper[4766]: I0129 11:44:01.488913 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-rgj67" Jan 29 11:44:01 crc kubenswrapper[4766]: I0129 11:44:01.521511 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gwttp" Jan 29 11:44:01 crc kubenswrapper[4766]: I0129 11:44:01.560960 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-c5gtf" Jan 29 11:44:01 crc kubenswrapper[4766]: I0129 11:44:01.614690 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kbqkb" Jan 29 11:44:01 crc kubenswrapper[4766]: I0129 11:44:01.645491 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cb57l" Jan 29 11:44:01 crc kubenswrapper[4766]: I0129 11:44:01.677672 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-gpc9d" Jan 29 11:44:01 crc kubenswrapper[4766]: I0129 11:44:01.802727 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zw2qx" event={"ID":"767a94c6-6767-4dc9-9054-70945f39e248","Type":"ContainerStarted","Data":"df42bf177c59fee7d9a4823389d122402e2b50ed189f07705facc51ef183c3cb"} Jan 29 11:44:01 crc kubenswrapper[4766]: I0129 11:44:01.802922 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zw2qx" Jan 29 11:44:01 crc 
Jan 29 11:44:01 crc kubenswrapper[4766]: I0129 11:44:01.821292 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zw2qx" podStartSLOduration=2.315423467 podStartE2EDuration="30.821274234s" podCreationTimestamp="2026-01-29 11:43:31 +0000 UTC" firstStartedPulling="2026-01-29 11:43:32.419864759 +0000 UTC m=+1349.532257770" lastFinishedPulling="2026-01-29 11:44:00.925715526 +0000 UTC m=+1378.038108537" observedRunningTime="2026-01-29 11:44:01.817334294 +0000 UTC m=+1378.929727305" watchObservedRunningTime="2026-01-29 11:44:01.821274234 +0000 UTC m=+1378.933667245"
Jan 29 11:44:01 crc kubenswrapper[4766]: I0129 11:44:01.837274 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-g5nzh"
Jan 29 11:44:01 crc kubenswrapper[4766]: I0129 11:44:01.839121 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2xvf8" podStartSLOduration=3.5169582090000002 podStartE2EDuration="30.839101598s" podCreationTimestamp="2026-01-29 11:43:31 +0000 UTC" firstStartedPulling="2026-01-29 11:43:33.356711241 +0000 UTC m=+1350.469104252" lastFinishedPulling="2026-01-29 11:44:00.67885463 +0000 UTC m=+1377.791247641" observedRunningTime="2026-01-29 11:44:01.833202575 +0000 UTC m=+1378.945595596" watchObservedRunningTime="2026-01-29 11:44:01.839101598 +0000 UTC m=+1378.951494609"
Jan 29 11:44:01 crc kubenswrapper[4766]: I0129 11:44:01.879540 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-nn778"
Jan 29 11:44:01 crc kubenswrapper[4766]: I0129 11:44:01.918337 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-lc5rh"
Jan 29 11:44:01 crc kubenswrapper[4766]: I0129 11:44:01.945179 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-bcpmv"
Jan 29 11:44:02 crc kubenswrapper[4766]: I0129 11:44:02.095596 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gth8s"
Jan 29 11:44:02 crc kubenswrapper[4766]: I0129 11:44:02.159048 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-w7n2r"
Jan 29 11:44:02 crc kubenswrapper[4766]: I0129 11:44:02.214293 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-6snl2"
Jan 29 11:44:02 crc kubenswrapper[4766]: I0129 11:44:02.229366 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-zgjhb"
Jan 29 11:44:03 crc kubenswrapper[4766]: I0129 11:44:03.180657 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa222cda-3f2c-49fb-9d14-466dce8c9c40-cert\") pod \"infra-operator-controller-manager-79955696d6-jhhql\" (UID: \"fa222cda-3f2c-49fb-9d14-466dce8c9c40\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jhhql"
Jan 29 11:44:03 crc kubenswrapper[4766]: I0129 11:44:03.186998 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa222cda-3f2c-49fb-9d14-466dce8c9c40-cert\") pod \"infra-operator-controller-manager-79955696d6-jhhql\" (UID: \"fa222cda-3f2c-49fb-9d14-466dce8c9c40\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jhhql"
Jan 29 11:44:03 crc kubenswrapper[4766]: I0129 11:44:03.367382 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-jhhql"
Jan 29 11:44:03 crc kubenswrapper[4766]: I0129 11:44:03.587684 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b1a55ff-1c16-4f2d-a92a-f00adeff5423-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2\" (UID: \"8b1a55ff-1c16-4f2d-a92a-f00adeff5423\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2"
Jan 29 11:44:03 crc kubenswrapper[4766]: I0129 11:44:03.599308 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b1a55ff-1c16-4f2d-a92a-f00adeff5423-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2\" (UID: \"8b1a55ff-1c16-4f2d-a92a-f00adeff5423\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2"
Jan 29 11:44:03 crc kubenswrapper[4766]: I0129 11:44:03.780077 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-jhhql"]
Jan 29 11:44:03 crc kubenswrapper[4766]: I0129 11:44:03.815761 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2"
Jan 29 11:44:03 crc kubenswrapper[4766]: I0129 11:44:03.821592 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-jhhql" event={"ID":"fa222cda-3f2c-49fb-9d14-466dce8c9c40","Type":"ContainerStarted","Data":"3a1c6d8b4a9c4da8148b21cf5359016ab0fc0aa20f90f1a71064f1df2e9aa99e"}
Jan 29 11:44:03 crc kubenswrapper[4766]: I0129 11:44:03.892067 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-metrics-certs\") pod \"openstack-operator-controller-manager-66cc5c7d8c-wrglp\" (UID: \"bd3e0529-a99d-4174-a9b1-7a937bf09579\") " pod="openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp"
Jan 29 11:44:03 crc kubenswrapper[4766]: I0129 11:44:03.892167 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-webhook-certs\") pod \"openstack-operator-controller-manager-66cc5c7d8c-wrglp\" (UID: \"bd3e0529-a99d-4174-a9b1-7a937bf09579\") " pod="openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp"
Jan 29 11:44:03 crc kubenswrapper[4766]: I0129 11:44:03.898744 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-webhook-certs\") pod \"openstack-operator-controller-manager-66cc5c7d8c-wrglp\" (UID: \"bd3e0529-a99d-4174-a9b1-7a937bf09579\") " pod="openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp"
Jan 29 11:44:03 crc kubenswrapper[4766]: I0129 11:44:03.900220 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd3e0529-a99d-4174-a9b1-7a937bf09579-metrics-certs\") pod \"openstack-operator-controller-manager-66cc5c7d8c-wrglp\" (UID: \"bd3e0529-a99d-4174-a9b1-7a937bf09579\") " pod="openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp"
Jan 29 11:44:04 crc kubenswrapper[4766]: I0129 11:44:04.022779 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2"]
Jan 29 11:44:04 crc kubenswrapper[4766]: W0129 11:44:04.024967 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b1a55ff_1c16_4f2d_a92a_f00adeff5423.slice/crio-dd3b8f3ddd1d3dc76efef3b742d8d18514e5439e32423bc62e919464d3c001ba WatchSource:0}: Error finding container dd3b8f3ddd1d3dc76efef3b742d8d18514e5439e32423bc62e919464d3c001ba: Status 404 returned error can't find the container with id dd3b8f3ddd1d3dc76efef3b742d8d18514e5439e32423bc62e919464d3c001ba
Jan 29 11:44:04 crc kubenswrapper[4766]: I0129 11:44:04.115059 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp"
Jan 29 11:44:04 crc kubenswrapper[4766]: I0129 11:44:04.532600 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp"]
Jan 29 11:44:04 crc kubenswrapper[4766]: I0129 11:44:04.833821 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2" event={"ID":"8b1a55ff-1c16-4f2d-a92a-f00adeff5423","Type":"ContainerStarted","Data":"dd3b8f3ddd1d3dc76efef3b742d8d18514e5439e32423bc62e919464d3c001ba"}
Jan 29 11:44:04 crc kubenswrapper[4766]: I0129 11:44:04.835608 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp" event={"ID":"bd3e0529-a99d-4174-a9b1-7a937bf09579","Type":"ContainerStarted","Data":"1cff64582261629eb549332bb55c047054f7ae11dd3b5a1e0e530d82be20ec51"}
Jan 29 11:44:04 crc kubenswrapper[4766]: I0129 11:44:04.835656 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp" event={"ID":"bd3e0529-a99d-4174-a9b1-7a937bf09579","Type":"ContainerStarted","Data":"44a517f67d860820e9fb8565b6aa9afe53763c8ffaadc5f0be297bac0dc26ff7"}
Jan 29 11:44:04 crc kubenswrapper[4766]: I0129 11:44:04.836026 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp"
Jan 29 11:44:04 crc kubenswrapper[4766]: I0129 11:44:04.873561 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp" podStartSLOduration=33.873535754 podStartE2EDuration="33.873535754s" podCreationTimestamp="2026-01-29 11:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:44:04.864281377 +0000 UTC m=+1381.976674408" watchObservedRunningTime="2026-01-29 11:44:04.873535754 +0000 UTC m=+1381.985928775"
Jan 29 11:44:06 crc kubenswrapper[4766]: I0129 11:44:06.850299 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-jhhql" event={"ID":"fa222cda-3f2c-49fb-9d14-466dce8c9c40","Type":"ContainerStarted","Data":"c47caa76e18443d051fac898cd407e12bd5bb27e381347fb6cb1873431f81789"}
Jan 29 11:44:06 crc kubenswrapper[4766]: I0129 11:44:06.850612 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-jhhql"
Jan 29 11:44:06 crc kubenswrapper[4766]: I0129 11:44:06.852179 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2" event={"ID":"8b1a55ff-1c16-4f2d-a92a-f00adeff5423","Type":"ContainerStarted","Data":"5039d628742475a2befb541a84508b1a2d849d86461efe820ee7fa92536727c7"}
Jan 29 11:44:06 crc kubenswrapper[4766]: I0129 11:44:06.852341 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2"
Jan 29 11:44:06 crc kubenswrapper[4766]: I0129 11:44:06.870500 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-jhhql" podStartSLOduration=33.41508076 podStartE2EDuration="35.870481897s" podCreationTimestamp="2026-01-29 11:43:31 +0000 UTC" firstStartedPulling="2026-01-29 11:44:03.792293258 +0000 UTC m=+1380.904686279" lastFinishedPulling="2026-01-29 11:44:06.247694405 +0000 UTC m=+1383.360087416" observedRunningTime="2026-01-29 11:44:06.865347475 +0000 UTC m=+1383.977740486" watchObservedRunningTime="2026-01-29 11:44:06.870481897 +0000 UTC m=+1383.982874908"
Jan 29 11:44:06 crc kubenswrapper[4766]: I0129 11:44:06.896836 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2" podStartSLOduration=33.673498218 podStartE2EDuration="35.896820518s" podCreationTimestamp="2026-01-29 11:43:31 +0000 UTC" firstStartedPulling="2026-01-29 11:44:04.026945716 +0000 UTC m=+1381.139338727" lastFinishedPulling="2026-01-29 11:44:06.250268016 +0000 UTC m=+1383.362661027" observedRunningTime="2026-01-29 11:44:06.892879948 +0000 UTC m=+1384.005272969" watchObservedRunningTime="2026-01-29 11:44:06.896820518 +0000 UTC m=+1384.009213529"
Jan 29 11:44:11 crc kubenswrapper[4766]: I0129 11:44:11.494306 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zw2qx"
Jan 29 11:44:13 crc kubenswrapper[4766]: I0129 11:44:13.374237 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-jhhql"
Jan 29 11:44:13 crc kubenswrapper[4766]: I0129 11:44:13.824979 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2"
Jan 29 11:44:14 crc kubenswrapper[4766]: I0129 11:44:14.122699 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-66cc5c7d8c-wrglp"
Jan 29 11:44:16 crc kubenswrapper[4766]: I0129 11:44:16.362134 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 11:44:16 crc kubenswrapper[4766]: I0129 11:44:16.362218 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 11:44:27 crc kubenswrapper[4766]: I0129 11:44:27.454006 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tq8l9"]
Jan 29 11:44:27 crc kubenswrapper[4766]: I0129 11:44:27.455605 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tq8l9"
Jan 29 11:44:27 crc kubenswrapper[4766]: I0129 11:44:27.460936 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Jan 29 11:44:27 crc kubenswrapper[4766]: I0129 11:44:27.460993 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Jan 29 11:44:27 crc kubenswrapper[4766]: I0129 11:44:27.461015 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Jan 29 11:44:27 crc kubenswrapper[4766]: I0129 11:44:27.461015 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-45d7m"
Jan 29 11:44:27 crc kubenswrapper[4766]: I0129 11:44:27.461965 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tq8l9"]
Jan 29 11:44:27 crc kubenswrapper[4766]: I0129 11:44:27.497385 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zkzsj"]
Jan 29 11:44:27 crc kubenswrapper[4766]: I0129 11:44:27.499430 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-zkzsj"
Jan 29 11:44:27 crc kubenswrapper[4766]: I0129 11:44:27.504341 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Jan 29 11:44:27 crc kubenswrapper[4766]: I0129 11:44:27.529740 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zkzsj"]
Jan 29 11:44:27 crc kubenswrapper[4766]: I0129 11:44:27.534343 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34bb5022-7e43-4c49-9ba9-cb5cfdb809a1-config\") pod \"dnsmasq-dns-675f4bcbfc-tq8l9\" (UID: \"34bb5022-7e43-4c49-9ba9-cb5cfdb809a1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tq8l9"
Jan 29 11:44:27 crc kubenswrapper[4766]: I0129 11:44:27.534442 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq4t6\" (UniqueName: \"kubernetes.io/projected/34bb5022-7e43-4c49-9ba9-cb5cfdb809a1-kube-api-access-rq4t6\") pod \"dnsmasq-dns-675f4bcbfc-tq8l9\" (UID: \"34bb5022-7e43-4c49-9ba9-cb5cfdb809a1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tq8l9"
Jan 29 11:44:27 crc kubenswrapper[4766]: I0129 11:44:27.636079 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34bb5022-7e43-4c49-9ba9-cb5cfdb809a1-config\") pod \"dnsmasq-dns-675f4bcbfc-tq8l9\" (UID: \"34bb5022-7e43-4c49-9ba9-cb5cfdb809a1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tq8l9"
Jan 29 11:44:27 crc kubenswrapper[4766]: I0129 11:44:27.636145 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq4t6\" (UniqueName: \"kubernetes.io/projected/34bb5022-7e43-4c49-9ba9-cb5cfdb809a1-kube-api-access-rq4t6\") pod \"dnsmasq-dns-675f4bcbfc-tq8l9\" (UID: \"34bb5022-7e43-4c49-9ba9-cb5cfdb809a1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tq8l9"
Jan 29 11:44:27 crc kubenswrapper[4766]: I0129 11:44:27.636178 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2e20bd3-1936-4f20-a093-0ca7a32de11f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-zkzsj\" (UID: \"b2e20bd3-1936-4f20-a093-0ca7a32de11f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zkzsj"
Jan 29 11:44:27 crc kubenswrapper[4766]: I0129 11:44:27.636207 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njxlz\" (UniqueName: \"kubernetes.io/projected/b2e20bd3-1936-4f20-a093-0ca7a32de11f-kube-api-access-njxlz\") pod \"dnsmasq-dns-78dd6ddcc-zkzsj\" (UID: \"b2e20bd3-1936-4f20-a093-0ca7a32de11f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zkzsj"
Jan 29 11:44:27 crc kubenswrapper[4766]: I0129 11:44:27.636241 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2e20bd3-1936-4f20-a093-0ca7a32de11f-config\") pod \"dnsmasq-dns-78dd6ddcc-zkzsj\" (UID: \"b2e20bd3-1936-4f20-a093-0ca7a32de11f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zkzsj"
Jan 29 11:44:27 crc kubenswrapper[4766]: I0129 11:44:27.637101 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34bb5022-7e43-4c49-9ba9-cb5cfdb809a1-config\") pod \"dnsmasq-dns-675f4bcbfc-tq8l9\" (UID: \"34bb5022-7e43-4c49-9ba9-cb5cfdb809a1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tq8l9"
Jan 29 11:44:27 crc kubenswrapper[4766]: I0129 11:44:27.685622 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq4t6\" (UniqueName: \"kubernetes.io/projected/34bb5022-7e43-4c49-9ba9-cb5cfdb809a1-kube-api-access-rq4t6\") pod \"dnsmasq-dns-675f4bcbfc-tq8l9\" (UID: \"34bb5022-7e43-4c49-9ba9-cb5cfdb809a1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tq8l9"
Jan 29 11:44:27 crc kubenswrapper[4766]: I0129 11:44:27.738517 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2e20bd3-1936-4f20-a093-0ca7a32de11f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-zkzsj\" (UID: \"b2e20bd3-1936-4f20-a093-0ca7a32de11f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zkzsj"
Jan 29 11:44:27 crc kubenswrapper[4766]: I0129 11:44:27.738587 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njxlz\" (UniqueName: \"kubernetes.io/projected/b2e20bd3-1936-4f20-a093-0ca7a32de11f-kube-api-access-njxlz\") pod \"dnsmasq-dns-78dd6ddcc-zkzsj\" (UID: \"b2e20bd3-1936-4f20-a093-0ca7a32de11f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zkzsj"
Jan 29 11:44:27 crc kubenswrapper[4766]: I0129 11:44:27.738634 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2e20bd3-1936-4f20-a093-0ca7a32de11f-config\") pod \"dnsmasq-dns-78dd6ddcc-zkzsj\" (UID: \"b2e20bd3-1936-4f20-a093-0ca7a32de11f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zkzsj"
Jan 29 11:44:27 crc kubenswrapper[4766]: I0129 11:44:27.740024 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2e20bd3-1936-4f20-a093-0ca7a32de11f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-zkzsj\" (UID: \"b2e20bd3-1936-4f20-a093-0ca7a32de11f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zkzsj"
Jan 29 11:44:27 crc kubenswrapper[4766]: I0129 11:44:27.751879 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2e20bd3-1936-4f20-a093-0ca7a32de11f-config\") pod \"dnsmasq-dns-78dd6ddcc-zkzsj\" (UID: \"b2e20bd3-1936-4f20-a093-0ca7a32de11f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zkzsj"
Jan 29 11:44:27 crc kubenswrapper[4766]: I0129 11:44:27.775276 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njxlz\" (UniqueName: \"kubernetes.io/projected/b2e20bd3-1936-4f20-a093-0ca7a32de11f-kube-api-access-njxlz\") pod \"dnsmasq-dns-78dd6ddcc-zkzsj\" (UID: \"b2e20bd3-1936-4f20-a093-0ca7a32de11f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zkzsj"
Jan 29 11:44:27 crc kubenswrapper[4766]: I0129 11:44:27.785979 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tq8l9"
Jan 29 11:44:27 crc kubenswrapper[4766]: I0129 11:44:27.820221 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-zkzsj"
Jan 29 11:44:28 crc kubenswrapper[4766]: I0129 11:44:28.044855 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tq8l9"]
Jan 29 11:44:28 crc kubenswrapper[4766]: W0129 11:44:28.052772 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34bb5022_7e43_4c49_9ba9_cb5cfdb809a1.slice/crio-97465d78eac2d57a2b722969f03adecb8504faed997894fec9e189f959462859 WatchSource:0}: Error finding container 97465d78eac2d57a2b722969f03adecb8504faed997894fec9e189f959462859: Status 404 returned error can't find the container with id 97465d78eac2d57a2b722969f03adecb8504faed997894fec9e189f959462859
Jan 29 11:44:28 crc kubenswrapper[4766]: I0129 11:44:28.127767 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zkzsj"]
Jan 29 11:44:28 crc kubenswrapper[4766]: W0129 11:44:28.134974 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2e20bd3_1936_4f20_a093_0ca7a32de11f.slice/crio-5f57290575e3503531184d28f66281da0141b571ba147272ac86cc527dfcf619 WatchSource:0}: Error finding container 5f57290575e3503531184d28f66281da0141b571ba147272ac86cc527dfcf619: Status 404 returned error can't find the container with id 5f57290575e3503531184d28f66281da0141b571ba147272ac86cc527dfcf619
Jan 29 11:44:29 crc kubenswrapper[4766]: I0129 11:44:29.018290 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-tq8l9" event={"ID":"34bb5022-7e43-4c49-9ba9-cb5cfdb809a1","Type":"ContainerStarted","Data":"97465d78eac2d57a2b722969f03adecb8504faed997894fec9e189f959462859"}
Jan 29 11:44:29 crc kubenswrapper[4766]: I0129 11:44:29.019989 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-zkzsj" event={"ID":"b2e20bd3-1936-4f20-a093-0ca7a32de11f","Type":"ContainerStarted","Data":"5f57290575e3503531184d28f66281da0141b571ba147272ac86cc527dfcf619"}
Jan 29 11:44:30 crc kubenswrapper[4766]: I0129 11:44:30.002487 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tq8l9"]
Jan 29 11:44:30 crc kubenswrapper[4766]: I0129 11:44:30.041022 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-njcbw"]
Jan 29 11:44:30 crc kubenswrapper[4766]: I0129 11:44:30.042355 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-njcbw"
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-njcbw" Jan 29 11:44:30 crc kubenswrapper[4766]: I0129 11:44:30.048123 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-njcbw"] Jan 29 11:44:30 crc kubenswrapper[4766]: I0129 11:44:30.173547 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3efa3a7-d212-4ae0-8f0a-47b25153393b-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-njcbw\" (UID: \"a3efa3a7-d212-4ae0-8f0a-47b25153393b\") " pod="openstack/dnsmasq-dns-5ccc8479f9-njcbw" Jan 29 11:44:30 crc kubenswrapper[4766]: I0129 11:44:30.173599 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3efa3a7-d212-4ae0-8f0a-47b25153393b-config\") pod \"dnsmasq-dns-5ccc8479f9-njcbw\" (UID: \"a3efa3a7-d212-4ae0-8f0a-47b25153393b\") " pod="openstack/dnsmasq-dns-5ccc8479f9-njcbw" Jan 29 11:44:30 crc kubenswrapper[4766]: I0129 11:44:30.173631 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxtfs\" (UniqueName: \"kubernetes.io/projected/a3efa3a7-d212-4ae0-8f0a-47b25153393b-kube-api-access-bxtfs\") pod \"dnsmasq-dns-5ccc8479f9-njcbw\" (UID: \"a3efa3a7-d212-4ae0-8f0a-47b25153393b\") " pod="openstack/dnsmasq-dns-5ccc8479f9-njcbw" Jan 29 11:44:30 crc kubenswrapper[4766]: I0129 11:44:30.275127 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3efa3a7-d212-4ae0-8f0a-47b25153393b-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-njcbw\" (UID: \"a3efa3a7-d212-4ae0-8f0a-47b25153393b\") " pod="openstack/dnsmasq-dns-5ccc8479f9-njcbw" Jan 29 11:44:30 crc kubenswrapper[4766]: I0129 11:44:30.275198 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3efa3a7-d212-4ae0-8f0a-47b25153393b-config\") pod \"dnsmasq-dns-5ccc8479f9-njcbw\" (UID: \"a3efa3a7-d212-4ae0-8f0a-47b25153393b\") " pod="openstack/dnsmasq-dns-5ccc8479f9-njcbw" Jan 29 11:44:30 crc kubenswrapper[4766]: I0129 11:44:30.275237 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxtfs\" (UniqueName: \"kubernetes.io/projected/a3efa3a7-d212-4ae0-8f0a-47b25153393b-kube-api-access-bxtfs\") pod \"dnsmasq-dns-5ccc8479f9-njcbw\" (UID: \"a3efa3a7-d212-4ae0-8f0a-47b25153393b\") " pod="openstack/dnsmasq-dns-5ccc8479f9-njcbw" Jan 29 11:44:30 crc kubenswrapper[4766]: I0129 11:44:30.277539 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3efa3a7-d212-4ae0-8f0a-47b25153393b-config\") pod \"dnsmasq-dns-5ccc8479f9-njcbw\" (UID: \"a3efa3a7-d212-4ae0-8f0a-47b25153393b\") " pod="openstack/dnsmasq-dns-5ccc8479f9-njcbw" Jan 29 11:44:30 crc kubenswrapper[4766]: I0129 11:44:30.277782 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3efa3a7-d212-4ae0-8f0a-47b25153393b-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-njcbw\" (UID: \"a3efa3a7-d212-4ae0-8f0a-47b25153393b\") " pod="openstack/dnsmasq-dns-5ccc8479f9-njcbw" Jan 29 11:44:30 crc kubenswrapper[4766]: I0129 11:44:30.308503 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxtfs\" (UniqueName: 
\"kubernetes.io/projected/a3efa3a7-d212-4ae0-8f0a-47b25153393b-kube-api-access-bxtfs\") pod \"dnsmasq-dns-5ccc8479f9-njcbw\" (UID: \"a3efa3a7-d212-4ae0-8f0a-47b25153393b\") " pod="openstack/dnsmasq-dns-5ccc8479f9-njcbw" Jan 29 11:44:30 crc kubenswrapper[4766]: I0129 11:44:30.312833 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zkzsj"] Jan 29 11:44:30 crc kubenswrapper[4766]: I0129 11:44:30.340197 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fzpsl"] Jan 29 11:44:30 crc kubenswrapper[4766]: I0129 11:44:30.345565 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-fzpsl" Jan 29 11:44:30 crc kubenswrapper[4766]: I0129 11:44:30.350598 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fzpsl"] Jan 29 11:44:30 crc kubenswrapper[4766]: I0129 11:44:30.374485 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-njcbw" Jan 29 11:44:30 crc kubenswrapper[4766]: I0129 11:44:30.477460 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02f91ca7-9f21-4f64-97ca-3d670aa1e439-config\") pod \"dnsmasq-dns-57d769cc4f-fzpsl\" (UID: \"02f91ca7-9f21-4f64-97ca-3d670aa1e439\") " pod="openstack/dnsmasq-dns-57d769cc4f-fzpsl" Jan 29 11:44:30 crc kubenswrapper[4766]: I0129 11:44:30.477574 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtnbs\" (UniqueName: \"kubernetes.io/projected/02f91ca7-9f21-4f64-97ca-3d670aa1e439-kube-api-access-gtnbs\") pod \"dnsmasq-dns-57d769cc4f-fzpsl\" (UID: \"02f91ca7-9f21-4f64-97ca-3d670aa1e439\") " pod="openstack/dnsmasq-dns-57d769cc4f-fzpsl" Jan 29 11:44:30 crc kubenswrapper[4766]: I0129 11:44:30.477631 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02f91ca7-9f21-4f64-97ca-3d670aa1e439-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-fzpsl\" (UID: \"02f91ca7-9f21-4f64-97ca-3d670aa1e439\") " pod="openstack/dnsmasq-dns-57d769cc4f-fzpsl" Jan 29 11:44:30 crc kubenswrapper[4766]: I0129 11:44:30.579116 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02f91ca7-9f21-4f64-97ca-3d670aa1e439-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-fzpsl\" (UID: \"02f91ca7-9f21-4f64-97ca-3d670aa1e439\") " pod="openstack/dnsmasq-dns-57d769cc4f-fzpsl" Jan 29 11:44:30 crc kubenswrapper[4766]: I0129 11:44:30.579201 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02f91ca7-9f21-4f64-97ca-3d670aa1e439-config\") pod \"dnsmasq-dns-57d769cc4f-fzpsl\" (UID: \"02f91ca7-9f21-4f64-97ca-3d670aa1e439\") " pod="openstack/dnsmasq-dns-57d769cc4f-fzpsl" Jan 29 11:44:30 crc kubenswrapper[4766]: I0129 11:44:30.579248 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtnbs\" (UniqueName: \"kubernetes.io/projected/02f91ca7-9f21-4f64-97ca-3d670aa1e439-kube-api-access-gtnbs\") pod \"dnsmasq-dns-57d769cc4f-fzpsl\" (UID: \"02f91ca7-9f21-4f64-97ca-3d670aa1e439\") " pod="openstack/dnsmasq-dns-57d769cc4f-fzpsl" Jan 29 11:44:30 crc kubenswrapper[4766]: I0129 11:44:30.580603 4766 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02f91ca7-9f21-4f64-97ca-3d670aa1e439-config\") pod \"dnsmasq-dns-57d769cc4f-fzpsl\" (UID: \"02f91ca7-9f21-4f64-97ca-3d670aa1e439\") " pod="openstack/dnsmasq-dns-57d769cc4f-fzpsl" Jan 29 11:44:30 crc kubenswrapper[4766]: I0129 11:44:30.580682 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02f91ca7-9f21-4f64-97ca-3d670aa1e439-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-fzpsl\" (UID: \"02f91ca7-9f21-4f64-97ca-3d670aa1e439\") " pod="openstack/dnsmasq-dns-57d769cc4f-fzpsl" Jan 29 11:44:30 crc kubenswrapper[4766]: I0129 11:44:30.596639 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtnbs\" (UniqueName: \"kubernetes.io/projected/02f91ca7-9f21-4f64-97ca-3d670aa1e439-kube-api-access-gtnbs\") pod \"dnsmasq-dns-57d769cc4f-fzpsl\" (UID: \"02f91ca7-9f21-4f64-97ca-3d670aa1e439\") " pod="openstack/dnsmasq-dns-57d769cc4f-fzpsl" Jan 29 11:44:30 crc kubenswrapper[4766]: I0129 11:44:30.671009 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-fzpsl" Jan 29 11:44:30 crc kubenswrapper[4766]: I0129 11:44:30.857888 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-njcbw"] Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.178814 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fzpsl"] Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.192617 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.194195 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.200398 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.200722 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4pchg" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.200932 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.201046 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.201155 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.201310 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.201597 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.234504 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.291023 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ace2f6ec-cf57-4742-82e9-e13fd230bb69-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.291071 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ace2f6ec-cf57-4742-82e9-e13fd230bb69-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.291100 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ace2f6ec-cf57-4742-82e9-e13fd230bb69-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.291123 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.291161 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6btz\" (UniqueName: \"kubernetes.io/projected/ace2f6ec-cf57-4742-82e9-e13fd230bb69-kube-api-access-w6btz\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.291223 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ace2f6ec-cf57-4742-82e9-e13fd230bb69-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.291271 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ace2f6ec-cf57-4742-82e9-e13fd230bb69-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.291299 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ace2f6ec-cf57-4742-82e9-e13fd230bb69-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.291322 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ace2f6ec-cf57-4742-82e9-e13fd230bb69-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.291389 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ace2f6ec-cf57-4742-82e9-e13fd230bb69-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.291439 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ace2f6ec-cf57-4742-82e9-e13fd230bb69-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.393215 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ace2f6ec-cf57-4742-82e9-e13fd230bb69-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.393282 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ace2f6ec-cf57-4742-82e9-e13fd230bb69-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.393308 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ace2f6ec-cf57-4742-82e9-e13fd230bb69-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.393349 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ace2f6ec-cf57-4742-82e9-e13fd230bb69-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.393386 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ace2f6ec-cf57-4742-82e9-e13fd230bb69-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.393464 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ace2f6ec-cf57-4742-82e9-e13fd230bb69-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.393489 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ace2f6ec-cf57-4742-82e9-e13fd230bb69-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.393521 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ace2f6ec-cf57-4742-82e9-e13fd230bb69-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.393565 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.393604 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6btz\" (UniqueName: \"kubernetes.io/projected/ace2f6ec-cf57-4742-82e9-e13fd230bb69-kube-api-access-w6btz\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.393650 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ace2f6ec-cf57-4742-82e9-e13fd230bb69-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.394256 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ace2f6ec-cf57-4742-82e9-e13fd230bb69-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.394385 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ace2f6ec-cf57-4742-82e9-e13fd230bb69-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.394543 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.394632 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ace2f6ec-cf57-4742-82e9-e13fd230bb69-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.394643 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ace2f6ec-cf57-4742-82e9-e13fd230bb69-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.396402 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ace2f6ec-cf57-4742-82e9-e13fd230bb69-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.399397 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ace2f6ec-cf57-4742-82e9-e13fd230bb69-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.399603 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ace2f6ec-cf57-4742-82e9-e13fd230bb69-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.405538 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ace2f6ec-cf57-4742-82e9-e13fd230bb69-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.415912 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6btz\" (UniqueName: \"kubernetes.io/projected/ace2f6ec-cf57-4742-82e9-e13fd230bb69-kube-api-access-w6btz\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.418338 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ace2f6ec-cf57-4742-82e9-e13fd230bb69-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.429746 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.481868 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.483464 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.486272 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.486387 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.486454 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.486549 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-q85zz" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.486602 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.486662 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.486888 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.487547 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.523235 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.597715 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b77b577e-b980-46fb-945a-a0b57e3bdc17-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.597760 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b77b577e-b980-46fb-945a-a0b57e3bdc17-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.597795 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.597822 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b77b577e-b980-46fb-945a-a0b57e3bdc17-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.597873 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b77b577e-b980-46fb-945a-a0b57e3bdc17-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.597894 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b77b577e-b980-46fb-945a-a0b57e3bdc17-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.597932 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b77b577e-b980-46fb-945a-a0b57e3bdc17-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.597974 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlbqg\" (UniqueName: \"kubernetes.io/projected/b77b577e-b980-46fb-945a-a0b57e3bdc17-kube-api-access-hlbqg\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.597998 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b77b577e-b980-46fb-945a-a0b57e3bdc17-config-data\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc 
kubenswrapper[4766]: I0129 11:44:31.598072 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b77b577e-b980-46fb-945a-a0b57e3bdc17-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.598100 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b77b577e-b980-46fb-945a-a0b57e3bdc17-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.699669 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b77b577e-b980-46fb-945a-a0b57e3bdc17-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.699741 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b77b577e-b980-46fb-945a-a0b57e3bdc17-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.699778 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.699803 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b77b577e-b980-46fb-945a-a0b57e3bdc17-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.699845 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b77b577e-b980-46fb-945a-a0b57e3bdc17-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.699868 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b77b577e-b980-46fb-945a-a0b57e3bdc17-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.699904 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b77b577e-b980-46fb-945a-a0b57e3bdc17-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.700056 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod 
\"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.701498 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b77b577e-b980-46fb-945a-a0b57e3bdc17-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.701724 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlbqg\" (UniqueName: \"kubernetes.io/projected/b77b577e-b980-46fb-945a-a0b57e3bdc17-kube-api-access-hlbqg\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.701947 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b77b577e-b980-46fb-945a-a0b57e3bdc17-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.702156 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b77b577e-b980-46fb-945a-a0b57e3bdc17-config-data\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.702272 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b77b577e-b980-46fb-945a-a0b57e3bdc17-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.702334 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b77b577e-b980-46fb-945a-a0b57e3bdc17-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.702852 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b77b577e-b980-46fb-945a-a0b57e3bdc17-config-data\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.702951 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b77b577e-b980-46fb-945a-a0b57e3bdc17-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.703179 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b77b577e-b980-46fb-945a-a0b57e3bdc17-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.706106 4766 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b77b577e-b980-46fb-945a-a0b57e3bdc17-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.706173 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b77b577e-b980-46fb-945a-a0b57e3bdc17-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.710114 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b77b577e-b980-46fb-945a-a0b57e3bdc17-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.712903 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b77b577e-b980-46fb-945a-a0b57e3bdc17-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.720212 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlbqg\" (UniqueName: \"kubernetes.io/projected/b77b577e-b980-46fb-945a-a0b57e3bdc17-kube-api-access-hlbqg\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.725129 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") " pod="openstack/rabbitmq-server-0" Jan 29 11:44:31 crc kubenswrapper[4766]: I0129 11:44:31.809807 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.660200 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.665181 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.676827 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-5cjjw" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.676825 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.677762 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.678054 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.678758 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.683967 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.717666 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-config-data-default\") pod \"openstack-galera-0\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " pod="openstack/openstack-galera-0" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.717724 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " pod="openstack/openstack-galera-0" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.717768 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " pod="openstack/openstack-galera-0" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.717801 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " pod="openstack/openstack-galera-0" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.717838 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-kolla-config\") pod \"openstack-galera-0\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " pod="openstack/openstack-galera-0" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.717913 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " pod="openstack/openstack-galera-0" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.717942 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " pod="openstack/openstack-galera-0" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.718011 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjd7l\" (UniqueName: \"kubernetes.io/projected/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-kube-api-access-bjd7l\") pod \"openstack-galera-0\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " pod="openstack/openstack-galera-0" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.818963 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjd7l\" (UniqueName: \"kubernetes.io/projected/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-kube-api-access-bjd7l\") pod \"openstack-galera-0\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " pod="openstack/openstack-galera-0" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.819019 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-config-data-default\") pod \"openstack-galera-0\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " pod="openstack/openstack-galera-0" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.819044 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " pod="openstack/openstack-galera-0" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.819077 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " pod="openstack/openstack-galera-0" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.819549 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " pod="openstack/openstack-galera-0" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.819546 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " pod="openstack/openstack-galera-0" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.819640 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-kolla-config\") pod \"openstack-galera-0\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " pod="openstack/openstack-galera-0" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.819715 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " pod="openstack/openstack-galera-0" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.819742 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " pod="openstack/openstack-galera-0" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.819755 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-galera-0" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.821809 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " pod="openstack/openstack-galera-0" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.825206 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-config-data-default\") pod \"openstack-galera-0\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " pod="openstack/openstack-galera-0" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.825999 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-kolla-config\") pod \"openstack-galera-0\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " pod="openstack/openstack-galera-0" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.830669 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " pod="openstack/openstack-galera-0" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.832085 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " pod="openstack/openstack-galera-0" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.847237 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjd7l\" (UniqueName: \"kubernetes.io/projected/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-kube-api-access-bjd7l\") pod \"openstack-galera-0\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " pod="openstack/openstack-galera-0" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.884945 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " pod="openstack/openstack-galera-0" Jan 29 11:44:32 crc kubenswrapper[4766]: I0129 11:44:32.997961 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.139047 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.140805 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.143084 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.143182 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.147278 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.149183 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-f9qwd" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.172837 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.343121 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.343509 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f673618-4b7d-47e5-84af-092c995bca8e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.343550 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f673618-4b7d-47e5-84af-092c995bca8e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.343607 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f673618-4b7d-47e5-84af-092c995bca8e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.343700 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4f673618-4b7d-47e5-84af-092c995bca8e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.343730 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f673618-4b7d-47e5-84af-092c995bca8e-kolla-config\") pod 
\"openstack-cell1-galera-0\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.343753 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4f673618-4b7d-47e5-84af-092c995bca8e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.343784 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chlbf\" (UniqueName: \"kubernetes.io/projected/4f673618-4b7d-47e5-84af-092c995bca8e-kube-api-access-chlbf\") pod \"openstack-cell1-galera-0\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.446130 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chlbf\" (UniqueName: \"kubernetes.io/projected/4f673618-4b7d-47e5-84af-092c995bca8e-kube-api-access-chlbf\") pod \"openstack-cell1-galera-0\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.446224 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.446263 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f673618-4b7d-47e5-84af-092c995bca8e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.446293 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f673618-4b7d-47e5-84af-092c995bca8e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.446346 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f673618-4b7d-47e5-84af-092c995bca8e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.446473 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4f673618-4b7d-47e5-84af-092c995bca8e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.446500 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f673618-4b7d-47e5-84af-092c995bca8e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: 
\"4f673618-4b7d-47e5-84af-092c995bca8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.446618 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4f673618-4b7d-47e5-84af-092c995bca8e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.446972 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-cell1-galera-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.447244 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4f673618-4b7d-47e5-84af-092c995bca8e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.449170 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f673618-4b7d-47e5-84af-092c995bca8e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.451078 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4f673618-4b7d-47e5-84af-092c995bca8e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.451766 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f673618-4b7d-47e5-84af-092c995bca8e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.457745 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f673618-4b7d-47e5-84af-092c995bca8e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.463306 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f673618-4b7d-47e5-84af-092c995bca8e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.469190 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chlbf\" (UniqueName: \"kubernetes.io/projected/4f673618-4b7d-47e5-84af-092c995bca8e-kube-api-access-chlbf\") pod \"openstack-cell1-galera-0\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:44:34 crc 
kubenswrapper[4766]: I0129 11:44:34.484401 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.643570 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.644730 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.651980 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.652192 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-zzbws" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.652302 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.662913 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.754096 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d00d673d-aea5-4014-8e2b-bcb78afb7606-kolla-config\") pod \"memcached-0\" (UID: \"d00d673d-aea5-4014-8e2b-bcb78afb7606\") " pod="openstack/memcached-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.754160 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00d673d-aea5-4014-8e2b-bcb78afb7606-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d00d673d-aea5-4014-8e2b-bcb78afb7606\") " pod="openstack/memcached-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.754205 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d00d673d-aea5-4014-8e2b-bcb78afb7606-config-data\") pod \"memcached-0\" (UID: \"d00d673d-aea5-4014-8e2b-bcb78afb7606\") " pod="openstack/memcached-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.754237 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq4rw\" (UniqueName: \"kubernetes.io/projected/d00d673d-aea5-4014-8e2b-bcb78afb7606-kube-api-access-cq4rw\") pod \"memcached-0\" (UID: \"d00d673d-aea5-4014-8e2b-bcb78afb7606\") " pod="openstack/memcached-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.754313 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d00d673d-aea5-4014-8e2b-bcb78afb7606-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d00d673d-aea5-4014-8e2b-bcb78afb7606\") " pod="openstack/memcached-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.765162 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.855491 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d00d673d-aea5-4014-8e2b-bcb78afb7606-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d00d673d-aea5-4014-8e2b-bcb78afb7606\") " pod="openstack/memcached-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.855603 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d00d673d-aea5-4014-8e2b-bcb78afb7606-kolla-config\") pod \"memcached-0\" (UID: \"d00d673d-aea5-4014-8e2b-bcb78afb7606\") " pod="openstack/memcached-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.855640 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00d673d-aea5-4014-8e2b-bcb78afb7606-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d00d673d-aea5-4014-8e2b-bcb78afb7606\") " pod="openstack/memcached-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.855680 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d00d673d-aea5-4014-8e2b-bcb78afb7606-config-data\") pod \"memcached-0\" (UID: \"d00d673d-aea5-4014-8e2b-bcb78afb7606\") " pod="openstack/memcached-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.855708 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq4rw\" (UniqueName: \"kubernetes.io/projected/d00d673d-aea5-4014-8e2b-bcb78afb7606-kube-api-access-cq4rw\") pod \"memcached-0\" (UID: \"d00d673d-aea5-4014-8e2b-bcb78afb7606\") " pod="openstack/memcached-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.856980 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d00d673d-aea5-4014-8e2b-bcb78afb7606-config-data\") pod \"memcached-0\" (UID: \"d00d673d-aea5-4014-8e2b-bcb78afb7606\") " pod="openstack/memcached-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.857562 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d00d673d-aea5-4014-8e2b-bcb78afb7606-kolla-config\") pod \"memcached-0\" (UID: \"d00d673d-aea5-4014-8e2b-bcb78afb7606\") " pod="openstack/memcached-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.860676 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00d673d-aea5-4014-8e2b-bcb78afb7606-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d00d673d-aea5-4014-8e2b-bcb78afb7606\") " pod="openstack/memcached-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.872543 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d00d673d-aea5-4014-8e2b-bcb78afb7606-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d00d673d-aea5-4014-8e2b-bcb78afb7606\") " pod="openstack/memcached-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.891919 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq4rw\" (UniqueName: \"kubernetes.io/projected/d00d673d-aea5-4014-8e2b-bcb78afb7606-kube-api-access-cq4rw\") pod \"memcached-0\" (UID: 
\"d00d673d-aea5-4014-8e2b-bcb78afb7606\") " pod="openstack/memcached-0" Jan 29 11:44:34 crc kubenswrapper[4766]: I0129 11:44:34.970787 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 11:44:36 crc kubenswrapper[4766]: I0129 11:44:36.083044 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-njcbw" event={"ID":"a3efa3a7-d212-4ae0-8f0a-47b25153393b","Type":"ContainerStarted","Data":"f26424cac0e822ae30593b2e607e1e2cda3566c61df40911cc87db55f9df1905"} Jan 29 11:44:36 crc kubenswrapper[4766]: I0129 11:44:36.084438 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-fzpsl" event={"ID":"02f91ca7-9f21-4f64-97ca-3d670aa1e439","Type":"ContainerStarted","Data":"a275c8d368fb555a7d689a88372240ca819df9a9d350ecfa837534c66dba17ff"} Jan 29 11:44:36 crc kubenswrapper[4766]: I0129 11:44:36.737542 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:44:36 crc kubenswrapper[4766]: I0129 11:44:36.738564 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 11:44:36 crc kubenswrapper[4766]: I0129 11:44:36.740508 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-pwqbp" Jan 29 11:44:36 crc kubenswrapper[4766]: I0129 11:44:36.747660 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:44:36 crc kubenswrapper[4766]: I0129 11:44:36.892973 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6s7h\" (UniqueName: \"kubernetes.io/projected/c10a3d13-c16f-41fa-83ac-c3454b7ed6c4-kube-api-access-z6s7h\") pod \"kube-state-metrics-0\" (UID: \"c10a3d13-c16f-41fa-83ac-c3454b7ed6c4\") " pod="openstack/kube-state-metrics-0" Jan 29 11:44:36 crc kubenswrapper[4766]: I0129 11:44:36.994186 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6s7h\" (UniqueName: \"kubernetes.io/projected/c10a3d13-c16f-41fa-83ac-c3454b7ed6c4-kube-api-access-z6s7h\") pod \"kube-state-metrics-0\" (UID: \"c10a3d13-c16f-41fa-83ac-c3454b7ed6c4\") " pod="openstack/kube-state-metrics-0" Jan 29 11:44:37 crc kubenswrapper[4766]: I0129 11:44:37.019533 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6s7h\" (UniqueName: \"kubernetes.io/projected/c10a3d13-c16f-41fa-83ac-c3454b7ed6c4-kube-api-access-z6s7h\") pod \"kube-state-metrics-0\" (UID: \"c10a3d13-c16f-41fa-83ac-c3454b7ed6c4\") " pod="openstack/kube-state-metrics-0" Jan 29 11:44:37 crc kubenswrapper[4766]: I0129 11:44:37.055393 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 11:44:37 crc kubenswrapper[4766]: I0129 11:44:37.949949 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.702139 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.703857 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.706767 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.706784 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-6vn2p" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.707056 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.707245 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.707458 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.713521 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.812754 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5kz4c"] Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.814056 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5kz4c" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.823077 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.823311 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-c6spc" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.823536 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.833210 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5kz4c"] Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.874808 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-config\") pod \"ovsdbserver-nb-0\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.874892 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.874924 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.874958 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " 
pod="openstack/ovsdbserver-nb-0" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.874991 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.875036 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.875070 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.875096 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjs9v\" (UniqueName: \"kubernetes.io/projected/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-kube-api-access-wjs9v\") pod \"ovsdbserver-nb-0\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.929057 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-2gh2n"] Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.931080 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-2gh2n" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.956175 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-2gh2n"] Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.976600 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73cf0e15-caab-4cea-94b5-7470d635d767-combined-ca-bundle\") pod \"ovn-controller-5kz4c\" (UID: \"73cf0e15-caab-4cea-94b5-7470d635d767\") " pod="openstack/ovn-controller-5kz4c" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.976666 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.976705 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73cf0e15-caab-4cea-94b5-7470d635d767-scripts\") pod \"ovn-controller-5kz4c\" (UID: \"73cf0e15-caab-4cea-94b5-7470d635d767\") " pod="openstack/ovn-controller-5kz4c" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.976764 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/73cf0e15-caab-4cea-94b5-7470d635d767-var-run-ovn\") pod \"ovn-controller-5kz4c\" (UID: \"73cf0e15-caab-4cea-94b5-7470d635d767\") " pod="openstack/ovn-controller-5kz4c" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.976851 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.976899 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/73cf0e15-caab-4cea-94b5-7470d635d767-ovn-controller-tls-certs\") pod \"ovn-controller-5kz4c\" (UID: \"73cf0e15-caab-4cea-94b5-7470d635d767\") " pod="openstack/ovn-controller-5kz4c" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.976919 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.976947 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjs9v\" (UniqueName: \"kubernetes.io/projected/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-kube-api-access-wjs9v\") pod \"ovsdbserver-nb-0\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.977035 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-config\") pod 
\"ovsdbserver-nb-0\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.977081 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/73cf0e15-caab-4cea-94b5-7470d635d767-var-run\") pod \"ovn-controller-5kz4c\" (UID: \"73cf0e15-caab-4cea-94b5-7470d635d767\") " pod="openstack/ovn-controller-5kz4c" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.977124 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.977154 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/73cf0e15-caab-4cea-94b5-7470d635d767-var-log-ovn\") pod \"ovn-controller-5kz4c\" (UID: \"73cf0e15-caab-4cea-94b5-7470d635d767\") " pod="openstack/ovn-controller-5kz4c" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.977198 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.977250 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwlrd\" (UniqueName: \"kubernetes.io/projected/73cf0e15-caab-4cea-94b5-7470d635d767-kube-api-access-lwlrd\") pod \"ovn-controller-5kz4c\" (UID: \"73cf0e15-caab-4cea-94b5-7470d635d767\") " pod="openstack/ovn-controller-5kz4c" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.977274 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.978259 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.977127 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.978862 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-nb-0" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.979060 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-config\") pod \"ovsdbserver-nb-0\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.994089 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:44:40 crc kubenswrapper[4766]: I0129 11:44:40.997389 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.007706 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.010122 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.023869 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjs9v\" (UniqueName: \"kubernetes.io/projected/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-kube-api-access-wjs9v\") pod \"ovsdbserver-nb-0\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.036171 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.078464 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/73cf0e15-caab-4cea-94b5-7470d635d767-var-run\") pod \"ovn-controller-5kz4c\" (UID: \"73cf0e15-caab-4cea-94b5-7470d635d767\") " pod="openstack/ovn-controller-5kz4c" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.078519 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/be830961-a6c3-4340-a134-ea20de96b31b-var-run\") pod \"ovn-controller-ovs-2gh2n\" (UID: \"be830961-a6c3-4340-a134-ea20de96b31b\") " pod="openstack/ovn-controller-ovs-2gh2n" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.078545 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwlrd\" (UniqueName: \"kubernetes.io/projected/73cf0e15-caab-4cea-94b5-7470d635d767-kube-api-access-lwlrd\") pod \"ovn-controller-5kz4c\" (UID: \"73cf0e15-caab-4cea-94b5-7470d635d767\") " pod="openstack/ovn-controller-5kz4c" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.078568 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73cf0e15-caab-4cea-94b5-7470d635d767-combined-ca-bundle\") pod \"ovn-controller-5kz4c\" (UID: \"73cf0e15-caab-4cea-94b5-7470d635d767\") " pod="openstack/ovn-controller-5kz4c" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.078596 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73cf0e15-caab-4cea-94b5-7470d635d767-scripts\") pod \"ovn-controller-5kz4c\" (UID: \"73cf0e15-caab-4cea-94b5-7470d635d767\") " pod="openstack/ovn-controller-5kz4c" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.078632 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be830961-a6c3-4340-a134-ea20de96b31b-scripts\") pod \"ovn-controller-ovs-2gh2n\" (UID: \"be830961-a6c3-4340-a134-ea20de96b31b\") " pod="openstack/ovn-controller-ovs-2gh2n" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.078650 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bqtd\" (UniqueName: \"kubernetes.io/projected/be830961-a6c3-4340-a134-ea20de96b31b-kube-api-access-5bqtd\") pod \"ovn-controller-ovs-2gh2n\" (UID: \"be830961-a6c3-4340-a134-ea20de96b31b\") " pod="openstack/ovn-controller-ovs-2gh2n" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.078675 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/be830961-a6c3-4340-a134-ea20de96b31b-var-lib\") pod \"ovn-controller-ovs-2gh2n\" (UID: \"be830961-a6c3-4340-a134-ea20de96b31b\") " pod="openstack/ovn-controller-ovs-2gh2n" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.078699 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/73cf0e15-caab-4cea-94b5-7470d635d767-var-log-ovn\") pod \"ovn-controller-5kz4c\" (UID: \"73cf0e15-caab-4cea-94b5-7470d635d767\") " pod="openstack/ovn-controller-5kz4c" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.079300 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/73cf0e15-caab-4cea-94b5-7470d635d767-var-run\") pod \"ovn-controller-5kz4c\" (UID: \"73cf0e15-caab-4cea-94b5-7470d635d767\") " pod="openstack/ovn-controller-5kz4c" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.079932 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/73cf0e15-caab-4cea-94b5-7470d635d767-var-log-ovn\") pod \"ovn-controller-5kz4c\" (UID: \"73cf0e15-caab-4cea-94b5-7470d635d767\") " pod="openstack/ovn-controller-5kz4c" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.080994 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/be830961-a6c3-4340-a134-ea20de96b31b-var-log\") pod \"ovn-controller-ovs-2gh2n\" (UID: \"be830961-a6c3-4340-a134-ea20de96b31b\") " pod="openstack/ovn-controller-ovs-2gh2n" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.081108 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/be830961-a6c3-4340-a134-ea20de96b31b-etc-ovs\") pod \"ovn-controller-ovs-2gh2n\" (UID: \"be830961-a6c3-4340-a134-ea20de96b31b\") " pod="openstack/ovn-controller-ovs-2gh2n" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.081150 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/73cf0e15-caab-4cea-94b5-7470d635d767-var-run-ovn\") pod \"ovn-controller-5kz4c\" (UID: \"73cf0e15-caab-4cea-94b5-7470d635d767\") " pod="openstack/ovn-controller-5kz4c" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.081207 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/73cf0e15-caab-4cea-94b5-7470d635d767-ovn-controller-tls-certs\") pod \"ovn-controller-5kz4c\" (UID: \"73cf0e15-caab-4cea-94b5-7470d635d767\") " pod="openstack/ovn-controller-5kz4c" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.082325 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73cf0e15-caab-4cea-94b5-7470d635d767-scripts\") pod \"ovn-controller-5kz4c\" (UID: \"73cf0e15-caab-4cea-94b5-7470d635d767\") " pod="openstack/ovn-controller-5kz4c" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.082484 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/73cf0e15-caab-4cea-94b5-7470d635d767-var-run-ovn\") pod \"ovn-controller-5kz4c\" (UID: \"73cf0e15-caab-4cea-94b5-7470d635d767\") " pod="openstack/ovn-controller-5kz4c" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.083644 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73cf0e15-caab-4cea-94b5-7470d635d767-combined-ca-bundle\") pod \"ovn-controller-5kz4c\" (UID: \"73cf0e15-caab-4cea-94b5-7470d635d767\") " pod="openstack/ovn-controller-5kz4c" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.085179 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/73cf0e15-caab-4cea-94b5-7470d635d767-ovn-controller-tls-certs\") pod \"ovn-controller-5kz4c\" (UID: 
\"73cf0e15-caab-4cea-94b5-7470d635d767\") " pod="openstack/ovn-controller-5kz4c" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.095796 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwlrd\" (UniqueName: \"kubernetes.io/projected/73cf0e15-caab-4cea-94b5-7470d635d767-kube-api-access-lwlrd\") pod \"ovn-controller-5kz4c\" (UID: \"73cf0e15-caab-4cea-94b5-7470d635d767\") " pod="openstack/ovn-controller-5kz4c" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.165303 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5kz4c" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.183123 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be830961-a6c3-4340-a134-ea20de96b31b-scripts\") pod \"ovn-controller-ovs-2gh2n\" (UID: \"be830961-a6c3-4340-a134-ea20de96b31b\") " pod="openstack/ovn-controller-ovs-2gh2n" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.183175 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bqtd\" (UniqueName: \"kubernetes.io/projected/be830961-a6c3-4340-a134-ea20de96b31b-kube-api-access-5bqtd\") pod \"ovn-controller-ovs-2gh2n\" (UID: \"be830961-a6c3-4340-a134-ea20de96b31b\") " pod="openstack/ovn-controller-ovs-2gh2n" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.183210 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/be830961-a6c3-4340-a134-ea20de96b31b-var-lib\") pod \"ovn-controller-ovs-2gh2n\" (UID: \"be830961-a6c3-4340-a134-ea20de96b31b\") " pod="openstack/ovn-controller-ovs-2gh2n" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.183262 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/be830961-a6c3-4340-a134-ea20de96b31b-var-log\") pod \"ovn-controller-ovs-2gh2n\" (UID: \"be830961-a6c3-4340-a134-ea20de96b31b\") " pod="openstack/ovn-controller-ovs-2gh2n" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.183297 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/be830961-a6c3-4340-a134-ea20de96b31b-etc-ovs\") pod \"ovn-controller-ovs-2gh2n\" (UID: \"be830961-a6c3-4340-a134-ea20de96b31b\") " pod="openstack/ovn-controller-ovs-2gh2n" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.183367 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/be830961-a6c3-4340-a134-ea20de96b31b-var-run\") pod \"ovn-controller-ovs-2gh2n\" (UID: \"be830961-a6c3-4340-a134-ea20de96b31b\") " pod="openstack/ovn-controller-ovs-2gh2n" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.183532 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/be830961-a6c3-4340-a134-ea20de96b31b-var-run\") pod \"ovn-controller-ovs-2gh2n\" (UID: \"be830961-a6c3-4340-a134-ea20de96b31b\") " pod="openstack/ovn-controller-ovs-2gh2n" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.183725 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/be830961-a6c3-4340-a134-ea20de96b31b-var-log\") pod \"ovn-controller-ovs-2gh2n\" (UID: \"be830961-a6c3-4340-a134-ea20de96b31b\") " 
pod="openstack/ovn-controller-ovs-2gh2n" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.183835 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/be830961-a6c3-4340-a134-ea20de96b31b-var-lib\") pod \"ovn-controller-ovs-2gh2n\" (UID: \"be830961-a6c3-4340-a134-ea20de96b31b\") " pod="openstack/ovn-controller-ovs-2gh2n" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.183886 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/be830961-a6c3-4340-a134-ea20de96b31b-etc-ovs\") pod \"ovn-controller-ovs-2gh2n\" (UID: \"be830961-a6c3-4340-a134-ea20de96b31b\") " pod="openstack/ovn-controller-ovs-2gh2n" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.196313 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be830961-a6c3-4340-a134-ea20de96b31b-scripts\") pod \"ovn-controller-ovs-2gh2n\" (UID: \"be830961-a6c3-4340-a134-ea20de96b31b\") " pod="openstack/ovn-controller-ovs-2gh2n" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.199580 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bqtd\" (UniqueName: \"kubernetes.io/projected/be830961-a6c3-4340-a134-ea20de96b31b-kube-api-access-5bqtd\") pod \"ovn-controller-ovs-2gh2n\" (UID: \"be830961-a6c3-4340-a134-ea20de96b31b\") " pod="openstack/ovn-controller-ovs-2gh2n" Jan 29 11:44:41 crc kubenswrapper[4766]: I0129 11:44:41.261789 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-2gh2n" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.144008 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4f673618-4b7d-47e5-84af-092c995bca8e","Type":"ContainerStarted","Data":"e3f8c6bef4110ccd8ee7e99ab184ea1f1b3f275259777a50ef5d0ce2c92300f9"} Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.300721 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.356963 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.360429 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.365203 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.365385 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.366706 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.367556 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.367625 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-czkp4" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.456968 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c961d826-8e7c-45cf-afa0-a1712a3def4f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.457267 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c961d826-8e7c-45cf-afa0-a1712a3def4f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.457302 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.457453 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mb62\" (UniqueName: \"kubernetes.io/projected/c961d826-8e7c-45cf-afa0-a1712a3def4f-kube-api-access-6mb62\") pod \"ovsdbserver-sb-0\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.457501 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c961d826-8e7c-45cf-afa0-a1712a3def4f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.457527 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c961d826-8e7c-45cf-afa0-a1712a3def4f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.457586 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c961d826-8e7c-45cf-afa0-a1712a3def4f-config\") pod \"ovsdbserver-sb-0\" (UID: 
\"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.457628 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c961d826-8e7c-45cf-afa0-a1712a3def4f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.559104 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c961d826-8e7c-45cf-afa0-a1712a3def4f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.559193 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c961d826-8e7c-45cf-afa0-a1712a3def4f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.559214 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.559262 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mb62\" (UniqueName: \"kubernetes.io/projected/c961d826-8e7c-45cf-afa0-a1712a3def4f-kube-api-access-6mb62\") pod \"ovsdbserver-sb-0\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.559287 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c961d826-8e7c-45cf-afa0-a1712a3def4f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.559303 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c961d826-8e7c-45cf-afa0-a1712a3def4f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.559329 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c961d826-8e7c-45cf-afa0-a1712a3def4f-config\") pod \"ovsdbserver-sb-0\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.559348 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c961d826-8e7c-45cf-afa0-a1712a3def4f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.560541 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/c961d826-8e7c-45cf-afa0-a1712a3def4f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.560695 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.561054 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c961d826-8e7c-45cf-afa0-a1712a3def4f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.561763 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c961d826-8e7c-45cf-afa0-a1712a3def4f-config\") pod \"ovsdbserver-sb-0\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.567275 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c961d826-8e7c-45cf-afa0-a1712a3def4f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.567290 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c961d826-8e7c-45cf-afa0-a1712a3def4f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.567298 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c961d826-8e7c-45cf-afa0-a1712a3def4f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.578174 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mb62\" (UniqueName: \"kubernetes.io/projected/c961d826-8e7c-45cf-afa0-a1712a3def4f-kube-api-access-6mb62\") pod \"ovsdbserver-sb-0\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.586636 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:43 crc kubenswrapper[4766]: I0129 11:44:43.683891 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:44 crc kubenswrapper[4766]: W0129 11:44:44.122286 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb77b577e_b980_46fb_945a_a0b57e3bdc17.slice/crio-15ef1f5966922a37eda2628875b1ee98ab9d0b61f0383424299889a86ad47c85 WatchSource:0}: Error finding container 15ef1f5966922a37eda2628875b1ee98ab9d0b61f0383424299889a86ad47c85: Status 404 returned error can't find the container with id 15ef1f5966922a37eda2628875b1ee98ab9d0b61f0383424299889a86ad47c85 Jan 29 11:44:44 crc kubenswrapper[4766]: I0129 11:44:44.153119 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b77b577e-b980-46fb-945a-a0b57e3bdc17","Type":"ContainerStarted","Data":"15ef1f5966922a37eda2628875b1ee98ab9d0b61f0383424299889a86ad47c85"} Jan 29 11:44:44 crc kubenswrapper[4766]: E0129 11:44:44.168074 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 29 11:44:44 crc kubenswrapper[4766]: E0129 11:44:44.168223 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-njxlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-zkzsj_openstack(b2e20bd3-1936-4f20-a093-0ca7a32de11f): ErrImagePull: rpc error: code = Canceled desc = copying config: 
context canceled" logger="UnhandledError" Jan 29 11:44:44 crc kubenswrapper[4766]: E0129 11:44:44.169571 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-zkzsj" podUID="b2e20bd3-1936-4f20-a093-0ca7a32de11f" Jan 29 11:44:44 crc kubenswrapper[4766]: E0129 11:44:44.252978 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 29 11:44:44 crc kubenswrapper[4766]: E0129 11:44:44.253261 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rq4t6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-tq8l9_openstack(34bb5022-7e43-4c49-9ba9-cb5cfdb809a1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:44:44 crc kubenswrapper[4766]: E0129 11:44:44.254799 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-tq8l9" podUID="34bb5022-7e43-4c49-9ba9-cb5cfdb809a1" Jan 29 11:44:44 crc kubenswrapper[4766]: I0129 11:44:44.419206 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 11:44:44 crc kubenswrapper[4766]: W0129 11:44:44.445525 4766 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea239fdb_85e2_48e6_b992_42bd9f7e66c8.slice/crio-9c115680c0bba07a9f9fdd83f98e5f90220c0d55e638f49633339c0ae7682697 WatchSource:0}: Error finding container 9c115680c0bba07a9f9fdd83f98e5f90220c0d55e638f49633339c0ae7682697: Status 404 returned error can't find the container with id 9c115680c0bba07a9f9fdd83f98e5f90220c0d55e638f49633339c0ae7682697 Jan 29 11:44:44 crc kubenswrapper[4766]: I0129 11:44:44.686481 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:44:44 crc kubenswrapper[4766]: I0129 11:44:44.704693 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 11:44:44 crc kubenswrapper[4766]: W0129 11:44:44.713528 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podace2f6ec_cf57_4742_82e9_e13fd230bb69.slice/crio-3740649b0c5c9f9d1f5ab11a33af00c8c252eee44743049d87a9d18daa6871f8 WatchSource:0}: Error finding container 3740649b0c5c9f9d1f5ab11a33af00c8c252eee44743049d87a9d18daa6871f8: Status 404 returned error can't find the container with id 3740649b0c5c9f9d1f5ab11a33af00c8c252eee44743049d87a9d18daa6871f8 Jan 29 11:44:44 crc kubenswrapper[4766]: I0129 11:44:44.804841 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 11:44:44 crc kubenswrapper[4766]: I0129 11:44:44.815358 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5kz4c"] Jan 29 11:44:44 crc kubenswrapper[4766]: I0129 11:44:44.886544 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 11:44:44 crc kubenswrapper[4766]: W0129 11:44:44.892863 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc961d826_8e7c_45cf_afa0_a1712a3def4f.slice/crio-5b8fb0e84bb620bce5c18a5dcbdf70d19eb3cff887eacb2d543e27f4e9dc6f9f WatchSource:0}: Error finding container 5b8fb0e84bb620bce5c18a5dcbdf70d19eb3cff887eacb2d543e27f4e9dc6f9f: Status 404 returned error can't find the container with id 5b8fb0e84bb620bce5c18a5dcbdf70d19eb3cff887eacb2d543e27f4e9dc6f9f Jan 29 11:44:44 crc kubenswrapper[4766]: I0129 11:44:44.933318 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-2gh2n"] Jan 29 11:44:44 crc kubenswrapper[4766]: W0129 11:44:44.946204 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe830961_a6c3_4340_a134_ea20de96b31b.slice/crio-123ae6f0b0c8f489594c3f72a5b97a2ba7c2ca88afbbfb473c9f02131f30b28d WatchSource:0}: Error finding container 123ae6f0b0c8f489594c3f72a5b97a2ba7c2ca88afbbfb473c9f02131f30b28d: Status 404 returned error can't find the container with id 123ae6f0b0c8f489594c3f72a5b97a2ba7c2ca88afbbfb473c9f02131f30b28d Jan 29 11:44:45 crc kubenswrapper[4766]: I0129 11:44:45.163038 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2gh2n" event={"ID":"be830961-a6c3-4340-a134-ea20de96b31b","Type":"ContainerStarted","Data":"123ae6f0b0c8f489594c3f72a5b97a2ba7c2ca88afbbfb473c9f02131f30b28d"} Jan 29 11:44:45 crc kubenswrapper[4766]: I0129 11:44:45.165319 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"c10a3d13-c16f-41fa-83ac-c3454b7ed6c4","Type":"ContainerStarted","Data":"cecdbf445115816e29318051e1778a98c9f25d03aeb20159fd09caaa8f0ee697"} Jan 29 11:44:45 crc kubenswrapper[4766]: I0129 11:44:45.166796 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c961d826-8e7c-45cf-afa0-a1712a3def4f","Type":"ContainerStarted","Data":"5b8fb0e84bb620bce5c18a5dcbdf70d19eb3cff887eacb2d543e27f4e9dc6f9f"} Jan 29 11:44:45 crc kubenswrapper[4766]: I0129 11:44:45.168788 4766 generic.go:334] "Generic (PLEG): container finished" podID="a3efa3a7-d212-4ae0-8f0a-47b25153393b" containerID="a5ebc7120991976ef12d27b6c5118484e479e63351d1d238e18d542010658411" exitCode=0 Jan 29 11:44:45 crc kubenswrapper[4766]: I0129 11:44:45.168815 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-njcbw" event={"ID":"a3efa3a7-d212-4ae0-8f0a-47b25153393b","Type":"ContainerDied","Data":"a5ebc7120991976ef12d27b6c5118484e479e63351d1d238e18d542010658411"} Jan 29 11:44:45 crc kubenswrapper[4766]: I0129 11:44:45.171850 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d00d673d-aea5-4014-8e2b-bcb78afb7606","Type":"ContainerStarted","Data":"c9c8fb0afaaf1c81a8af8256095a8dec5f807b9051a457ec08e7a651f681805c"} Jan 29 11:44:45 crc kubenswrapper[4766]: I0129 11:44:45.181673 4766 generic.go:334] "Generic (PLEG): container finished" podID="02f91ca7-9f21-4f64-97ca-3d670aa1e439" containerID="ccb294817b1b2e24b0fb96cb398a8e43dd625150fa3f6ddb01683dd087ae0e29" exitCode=0 Jan 29 11:44:45 crc kubenswrapper[4766]: I0129 11:44:45.181786 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-fzpsl" event={"ID":"02f91ca7-9f21-4f64-97ca-3d670aa1e439","Type":"ContainerDied","Data":"ccb294817b1b2e24b0fb96cb398a8e43dd625150fa3f6ddb01683dd087ae0e29"} Jan 29 11:44:45 crc kubenswrapper[4766]: I0129 11:44:45.183763 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ace2f6ec-cf57-4742-82e9-e13fd230bb69","Type":"ContainerStarted","Data":"3740649b0c5c9f9d1f5ab11a33af00c8c252eee44743049d87a9d18daa6871f8"} Jan 29 11:44:45 crc kubenswrapper[4766]: I0129 11:44:45.186293 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5kz4c" event={"ID":"73cf0e15-caab-4cea-94b5-7470d635d767","Type":"ContainerStarted","Data":"c28c597ce46a345605c7d91b21af94df40025b19ddd33835eea147c3d4543a81"} Jan 29 11:44:45 crc kubenswrapper[4766]: I0129 11:44:45.192435 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ea239fdb-85e2-48e6-b992-42bd9f7e66c8","Type":"ContainerStarted","Data":"9c115680c0bba07a9f9fdd83f98e5f90220c0d55e638f49633339c0ae7682697"} Jan 29 11:44:45 crc kubenswrapper[4766]: I0129 11:44:45.569112 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tq8l9" Jan 29 11:44:45 crc kubenswrapper[4766]: I0129 11:44:45.573916 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-zkzsj" Jan 29 11:44:45 crc kubenswrapper[4766]: I0129 11:44:45.607208 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2e20bd3-1936-4f20-a093-0ca7a32de11f-config\") pod \"b2e20bd3-1936-4f20-a093-0ca7a32de11f\" (UID: \"b2e20bd3-1936-4f20-a093-0ca7a32de11f\") " Jan 29 11:44:45 crc kubenswrapper[4766]: I0129 11:44:45.607269 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rq4t6\" (UniqueName: \"kubernetes.io/projected/34bb5022-7e43-4c49-9ba9-cb5cfdb809a1-kube-api-access-rq4t6\") pod \"34bb5022-7e43-4c49-9ba9-cb5cfdb809a1\" (UID: \"34bb5022-7e43-4c49-9ba9-cb5cfdb809a1\") " Jan 29 11:44:45 crc kubenswrapper[4766]: I0129 11:44:45.607325 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2e20bd3-1936-4f20-a093-0ca7a32de11f-dns-svc\") pod \"b2e20bd3-1936-4f20-a093-0ca7a32de11f\" (UID: \"b2e20bd3-1936-4f20-a093-0ca7a32de11f\") " Jan 29 11:44:45 crc kubenswrapper[4766]: I0129 11:44:45.607374 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njxlz\" (UniqueName: \"kubernetes.io/projected/b2e20bd3-1936-4f20-a093-0ca7a32de11f-kube-api-access-njxlz\") pod \"b2e20bd3-1936-4f20-a093-0ca7a32de11f\" (UID: \"b2e20bd3-1936-4f20-a093-0ca7a32de11f\") " Jan 29 11:44:45 crc kubenswrapper[4766]: I0129 11:44:45.607492 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34bb5022-7e43-4c49-9ba9-cb5cfdb809a1-config\") pod \"34bb5022-7e43-4c49-9ba9-cb5cfdb809a1\" (UID: \"34bb5022-7e43-4c49-9ba9-cb5cfdb809a1\") " Jan 29 11:44:45 crc kubenswrapper[4766]: I0129 11:44:45.608543 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34bb5022-7e43-4c49-9ba9-cb5cfdb809a1-config" (OuterVolumeSpecName: "config") pod "34bb5022-7e43-4c49-9ba9-cb5cfdb809a1" (UID: "34bb5022-7e43-4c49-9ba9-cb5cfdb809a1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:44:45 crc kubenswrapper[4766]: I0129 11:44:45.608773 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2e20bd3-1936-4f20-a093-0ca7a32de11f-config" (OuterVolumeSpecName: "config") pod "b2e20bd3-1936-4f20-a093-0ca7a32de11f" (UID: "b2e20bd3-1936-4f20-a093-0ca7a32de11f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:44:45 crc kubenswrapper[4766]: I0129 11:44:45.609150 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2e20bd3-1936-4f20-a093-0ca7a32de11f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b2e20bd3-1936-4f20-a093-0ca7a32de11f" (UID: "b2e20bd3-1936-4f20-a093-0ca7a32de11f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:44:45 crc kubenswrapper[4766]: I0129 11:44:45.614482 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34bb5022-7e43-4c49-9ba9-cb5cfdb809a1-kube-api-access-rq4t6" (OuterVolumeSpecName: "kube-api-access-rq4t6") pod "34bb5022-7e43-4c49-9ba9-cb5cfdb809a1" (UID: "34bb5022-7e43-4c49-9ba9-cb5cfdb809a1"). InnerVolumeSpecName "kube-api-access-rq4t6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:44:45 crc kubenswrapper[4766]: I0129 11:44:45.614547 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2e20bd3-1936-4f20-a093-0ca7a32de11f-kube-api-access-njxlz" (OuterVolumeSpecName: "kube-api-access-njxlz") pod "b2e20bd3-1936-4f20-a093-0ca7a32de11f" (UID: "b2e20bd3-1936-4f20-a093-0ca7a32de11f"). InnerVolumeSpecName "kube-api-access-njxlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:44:45 crc kubenswrapper[4766]: I0129 11:44:45.709816 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34bb5022-7e43-4c49-9ba9-cb5cfdb809a1-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:44:45 crc kubenswrapper[4766]: I0129 11:44:45.709849 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2e20bd3-1936-4f20-a093-0ca7a32de11f-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:44:45 crc kubenswrapper[4766]: I0129 11:44:45.709860 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rq4t6\" (UniqueName: \"kubernetes.io/projected/34bb5022-7e43-4c49-9ba9-cb5cfdb809a1-kube-api-access-rq4t6\") on node \"crc\" DevicePath \"\"" Jan 29 11:44:45 crc kubenswrapper[4766]: I0129 11:44:45.709871 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2e20bd3-1936-4f20-a093-0ca7a32de11f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:44:45 crc kubenswrapper[4766]: I0129 11:44:45.709880 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njxlz\" (UniqueName: \"kubernetes.io/projected/b2e20bd3-1936-4f20-a093-0ca7a32de11f-kube-api-access-njxlz\") on node \"crc\" DevicePath \"\"" Jan 29 11:44:45 crc kubenswrapper[4766]: I0129 11:44:45.916404 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 11:44:46 crc kubenswrapper[4766]: W0129 11:44:46.045200 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51f2b06e_748d_4bb1_b7e7_f5cd039a532d.slice/crio-4b038c07875536956bb8fa9ef2ba60d199c6693186e25830439d36a2a72eac99 WatchSource:0}: Error finding container 4b038c07875536956bb8fa9ef2ba60d199c6693186e25830439d36a2a72eac99: Status 404 returned error can't find the container with id 4b038c07875536956bb8fa9ef2ba60d199c6693186e25830439d36a2a72eac99 Jan 29 11:44:46 crc kubenswrapper[4766]: I0129 11:44:46.203645 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"51f2b06e-748d-4bb1-b7e7-f5cd039a532d","Type":"ContainerStarted","Data":"4b038c07875536956bb8fa9ef2ba60d199c6693186e25830439d36a2a72eac99"} Jan 29 11:44:46 crc kubenswrapper[4766]: I0129 11:44:46.206079 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-njcbw" event={"ID":"a3efa3a7-d212-4ae0-8f0a-47b25153393b","Type":"ContainerStarted","Data":"2f07a103571fa28698cfe31214237075edcd2173d830bacd5b3be17e47817d3d"} Jan 29 11:44:46 crc kubenswrapper[4766]: I0129 11:44:46.206286 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc8479f9-njcbw" Jan 29 11:44:46 crc kubenswrapper[4766]: I0129 11:44:46.208772 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-tq8l9" 
event={"ID":"34bb5022-7e43-4c49-9ba9-cb5cfdb809a1","Type":"ContainerDied","Data":"97465d78eac2d57a2b722969f03adecb8504faed997894fec9e189f959462859"} Jan 29 11:44:46 crc kubenswrapper[4766]: I0129 11:44:46.208793 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tq8l9" Jan 29 11:44:46 crc kubenswrapper[4766]: I0129 11:44:46.211024 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-fzpsl" event={"ID":"02f91ca7-9f21-4f64-97ca-3d670aa1e439","Type":"ContainerStarted","Data":"8d645c2b38d3968609f27f6179521ca700df90dc534bad66cd33d0aa893cf82a"} Jan 29 11:44:46 crc kubenswrapper[4766]: I0129 11:44:46.211148 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-fzpsl" Jan 29 11:44:46 crc kubenswrapper[4766]: I0129 11:44:46.212124 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-zkzsj" event={"ID":"b2e20bd3-1936-4f20-a093-0ca7a32de11f","Type":"ContainerDied","Data":"5f57290575e3503531184d28f66281da0141b571ba147272ac86cc527dfcf619"} Jan 29 11:44:46 crc kubenswrapper[4766]: I0129 11:44:46.212235 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-zkzsj" Jan 29 11:44:46 crc kubenswrapper[4766]: I0129 11:44:46.235918 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc8479f9-njcbw" podStartSLOduration=7.260439558 podStartE2EDuration="16.23589954s" podCreationTimestamp="2026-01-29 11:44:30 +0000 UTC" firstStartedPulling="2026-01-29 11:44:35.287533601 +0000 UTC m=+1412.399926612" lastFinishedPulling="2026-01-29 11:44:44.262993593 +0000 UTC m=+1421.375386594" observedRunningTime="2026-01-29 11:44:46.227773214 +0000 UTC m=+1423.340166245" watchObservedRunningTime="2026-01-29 11:44:46.23589954 +0000 UTC m=+1423.348292561" Jan 29 11:44:46 crc kubenswrapper[4766]: I0129 11:44:46.251441 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-fzpsl" podStartSLOduration=7.2605129 podStartE2EDuration="16.25142372s" podCreationTimestamp="2026-01-29 11:44:30 +0000 UTC" firstStartedPulling="2026-01-29 11:44:35.287090959 +0000 UTC m=+1412.399483970" lastFinishedPulling="2026-01-29 11:44:44.278001779 +0000 UTC m=+1421.390394790" observedRunningTime="2026-01-29 11:44:46.246786052 +0000 UTC m=+1423.359179063" watchObservedRunningTime="2026-01-29 11:44:46.25142372 +0000 UTC m=+1423.363816741" Jan 29 11:44:46 crc kubenswrapper[4766]: I0129 11:44:46.285651 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zkzsj"] Jan 29 11:44:46 crc kubenswrapper[4766]: I0129 11:44:46.302145 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zkzsj"] Jan 29 11:44:46 crc kubenswrapper[4766]: I0129 11:44:46.318660 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tq8l9"] Jan 29 11:44:46 crc kubenswrapper[4766]: I0129 11:44:46.334836 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tq8l9"] Jan 29 11:44:46 crc kubenswrapper[4766]: I0129 11:44:46.362342 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 29 11:44:46 crc kubenswrapper[4766]: I0129 11:44:46.362430 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.234889 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34bb5022-7e43-4c49-9ba9-cb5cfdb809a1" path="/var/lib/kubelet/pods/34bb5022-7e43-4c49-9ba9-cb5cfdb809a1/volumes" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.235586 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2e20bd3-1936-4f20-a093-0ca7a32de11f" path="/var/lib/kubelet/pods/b2e20bd3-1936-4f20-a093-0ca7a32de11f/volumes" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.520249 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-pmh5k"] Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.521252 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-pmh5k" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.527106 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.537094 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-pmh5k"] Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.563657 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defb6fef-3db5-4137-a250-9e20054fe48a-combined-ca-bundle\") pod \"ovn-controller-metrics-pmh5k\" (UID: \"defb6fef-3db5-4137-a250-9e20054fe48a\") " pod="openstack/ovn-controller-metrics-pmh5k" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.563724 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/defb6fef-3db5-4137-a250-9e20054fe48a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-pmh5k\" (UID: \"defb6fef-3db5-4137-a250-9e20054fe48a\") " pod="openstack/ovn-controller-metrics-pmh5k" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.563766 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/defb6fef-3db5-4137-a250-9e20054fe48a-ovs-rundir\") pod \"ovn-controller-metrics-pmh5k\" (UID: \"defb6fef-3db5-4137-a250-9e20054fe48a\") " pod="openstack/ovn-controller-metrics-pmh5k" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.563860 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/defb6fef-3db5-4137-a250-9e20054fe48a-config\") pod \"ovn-controller-metrics-pmh5k\" (UID: \"defb6fef-3db5-4137-a250-9e20054fe48a\") " pod="openstack/ovn-controller-metrics-pmh5k" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.563884 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsdd9\" (UniqueName: \"kubernetes.io/projected/defb6fef-3db5-4137-a250-9e20054fe48a-kube-api-access-dsdd9\") pod 
\"ovn-controller-metrics-pmh5k\" (UID: \"defb6fef-3db5-4137-a250-9e20054fe48a\") " pod="openstack/ovn-controller-metrics-pmh5k" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.563916 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/defb6fef-3db5-4137-a250-9e20054fe48a-ovn-rundir\") pod \"ovn-controller-metrics-pmh5k\" (UID: \"defb6fef-3db5-4137-a250-9e20054fe48a\") " pod="openstack/ovn-controller-metrics-pmh5k" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.654758 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-njcbw"] Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.665031 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/defb6fef-3db5-4137-a250-9e20054fe48a-config\") pod \"ovn-controller-metrics-pmh5k\" (UID: \"defb6fef-3db5-4137-a250-9e20054fe48a\") " pod="openstack/ovn-controller-metrics-pmh5k" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.665093 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsdd9\" (UniqueName: \"kubernetes.io/projected/defb6fef-3db5-4137-a250-9e20054fe48a-kube-api-access-dsdd9\") pod \"ovn-controller-metrics-pmh5k\" (UID: \"defb6fef-3db5-4137-a250-9e20054fe48a\") " pod="openstack/ovn-controller-metrics-pmh5k" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.665130 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/defb6fef-3db5-4137-a250-9e20054fe48a-ovn-rundir\") pod \"ovn-controller-metrics-pmh5k\" (UID: \"defb6fef-3db5-4137-a250-9e20054fe48a\") " pod="openstack/ovn-controller-metrics-pmh5k" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.665205 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defb6fef-3db5-4137-a250-9e20054fe48a-combined-ca-bundle\") pod \"ovn-controller-metrics-pmh5k\" (UID: \"defb6fef-3db5-4137-a250-9e20054fe48a\") " pod="openstack/ovn-controller-metrics-pmh5k" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.665230 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/defb6fef-3db5-4137-a250-9e20054fe48a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-pmh5k\" (UID: \"defb6fef-3db5-4137-a250-9e20054fe48a\") " pod="openstack/ovn-controller-metrics-pmh5k" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.665261 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/defb6fef-3db5-4137-a250-9e20054fe48a-ovs-rundir\") pod \"ovn-controller-metrics-pmh5k\" (UID: \"defb6fef-3db5-4137-a250-9e20054fe48a\") " pod="openstack/ovn-controller-metrics-pmh5k" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.665628 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/defb6fef-3db5-4137-a250-9e20054fe48a-ovs-rundir\") pod \"ovn-controller-metrics-pmh5k\" (UID: \"defb6fef-3db5-4137-a250-9e20054fe48a\") " pod="openstack/ovn-controller-metrics-pmh5k" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.666055 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" 
(UniqueName: \"kubernetes.io/host-path/defb6fef-3db5-4137-a250-9e20054fe48a-ovn-rundir\") pod \"ovn-controller-metrics-pmh5k\" (UID: \"defb6fef-3db5-4137-a250-9e20054fe48a\") " pod="openstack/ovn-controller-metrics-pmh5k" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.667001 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/defb6fef-3db5-4137-a250-9e20054fe48a-config\") pod \"ovn-controller-metrics-pmh5k\" (UID: \"defb6fef-3db5-4137-a250-9e20054fe48a\") " pod="openstack/ovn-controller-metrics-pmh5k" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.678181 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defb6fef-3db5-4137-a250-9e20054fe48a-combined-ca-bundle\") pod \"ovn-controller-metrics-pmh5k\" (UID: \"defb6fef-3db5-4137-a250-9e20054fe48a\") " pod="openstack/ovn-controller-metrics-pmh5k" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.682242 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/defb6fef-3db5-4137-a250-9e20054fe48a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-pmh5k\" (UID: \"defb6fef-3db5-4137-a250-9e20054fe48a\") " pod="openstack/ovn-controller-metrics-pmh5k" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.690338 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-6jx2g"] Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.691900 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-6jx2g" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.703357 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.703988 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsdd9\" (UniqueName: \"kubernetes.io/projected/defb6fef-3db5-4137-a250-9e20054fe48a-kube-api-access-dsdd9\") pod \"ovn-controller-metrics-pmh5k\" (UID: \"defb6fef-3db5-4137-a250-9e20054fe48a\") " pod="openstack/ovn-controller-metrics-pmh5k" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.725913 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-6jx2g"] Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.766706 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e45b3f77-28fa-4188-b58c-b50cebb7fed6-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-6jx2g\" (UID: \"e45b3f77-28fa-4188-b58c-b50cebb7fed6\") " pod="openstack/dnsmasq-dns-5bf47b49b7-6jx2g" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.766783 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhnz7\" (UniqueName: \"kubernetes.io/projected/e45b3f77-28fa-4188-b58c-b50cebb7fed6-kube-api-access-hhnz7\") pod \"dnsmasq-dns-5bf47b49b7-6jx2g\" (UID: \"e45b3f77-28fa-4188-b58c-b50cebb7fed6\") " pod="openstack/dnsmasq-dns-5bf47b49b7-6jx2g" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.766973 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e45b3f77-28fa-4188-b58c-b50cebb7fed6-config\") pod \"dnsmasq-dns-5bf47b49b7-6jx2g\" (UID: 
\"e45b3f77-28fa-4188-b58c-b50cebb7fed6\") " pod="openstack/dnsmasq-dns-5bf47b49b7-6jx2g" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.767153 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e45b3f77-28fa-4188-b58c-b50cebb7fed6-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-6jx2g\" (UID: \"e45b3f77-28fa-4188-b58c-b50cebb7fed6\") " pod="openstack/dnsmasq-dns-5bf47b49b7-6jx2g" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.813764 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fzpsl"] Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.842801 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-pmh5k" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.851365 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-bh8d5"] Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.853086 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-bh8d5" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.855335 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.868809 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e45b3f77-28fa-4188-b58c-b50cebb7fed6-config\") pod \"dnsmasq-dns-5bf47b49b7-6jx2g\" (UID: \"e45b3f77-28fa-4188-b58c-b50cebb7fed6\") " pod="openstack/dnsmasq-dns-5bf47b49b7-6jx2g" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.868875 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e45b3f77-28fa-4188-b58c-b50cebb7fed6-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-6jx2g\" (UID: \"e45b3f77-28fa-4188-b58c-b50cebb7fed6\") " pod="openstack/dnsmasq-dns-5bf47b49b7-6jx2g" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.868913 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e45b3f77-28fa-4188-b58c-b50cebb7fed6-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-6jx2g\" (UID: \"e45b3f77-28fa-4188-b58c-b50cebb7fed6\") " pod="openstack/dnsmasq-dns-5bf47b49b7-6jx2g" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.869923 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e45b3f77-28fa-4188-b58c-b50cebb7fed6-config\") pod \"dnsmasq-dns-5bf47b49b7-6jx2g\" (UID: \"e45b3f77-28fa-4188-b58c-b50cebb7fed6\") " pod="openstack/dnsmasq-dns-5bf47b49b7-6jx2g" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.877050 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhnz7\" (UniqueName: \"kubernetes.io/projected/e45b3f77-28fa-4188-b58c-b50cebb7fed6-kube-api-access-hhnz7\") pod \"dnsmasq-dns-5bf47b49b7-6jx2g\" (UID: \"e45b3f77-28fa-4188-b58c-b50cebb7fed6\") " pod="openstack/dnsmasq-dns-5bf47b49b7-6jx2g" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.878139 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e45b3f77-28fa-4188-b58c-b50cebb7fed6-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-6jx2g\" (UID: 
\"e45b3f77-28fa-4188-b58c-b50cebb7fed6\") " pod="openstack/dnsmasq-dns-5bf47b49b7-6jx2g" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.878551 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e45b3f77-28fa-4188-b58c-b50cebb7fed6-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-6jx2g\" (UID: \"e45b3f77-28fa-4188-b58c-b50cebb7fed6\") " pod="openstack/dnsmasq-dns-5bf47b49b7-6jx2g" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.881874 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-bh8d5"] Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.919884 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhnz7\" (UniqueName: \"kubernetes.io/projected/e45b3f77-28fa-4188-b58c-b50cebb7fed6-kube-api-access-hhnz7\") pod \"dnsmasq-dns-5bf47b49b7-6jx2g\" (UID: \"e45b3f77-28fa-4188-b58c-b50cebb7fed6\") " pod="openstack/dnsmasq-dns-5bf47b49b7-6jx2g" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.978881 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53499ab2-7d33-4eb2-88da-fc49dc29009f-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-bh8d5\" (UID: \"53499ab2-7d33-4eb2-88da-fc49dc29009f\") " pod="openstack/dnsmasq-dns-8554648995-bh8d5" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.978947 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53499ab2-7d33-4eb2-88da-fc49dc29009f-config\") pod \"dnsmasq-dns-8554648995-bh8d5\" (UID: \"53499ab2-7d33-4eb2-88da-fc49dc29009f\") " pod="openstack/dnsmasq-dns-8554648995-bh8d5" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.978985 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwfbr\" (UniqueName: \"kubernetes.io/projected/53499ab2-7d33-4eb2-88da-fc49dc29009f-kube-api-access-dwfbr\") pod \"dnsmasq-dns-8554648995-bh8d5\" (UID: \"53499ab2-7d33-4eb2-88da-fc49dc29009f\") " pod="openstack/dnsmasq-dns-8554648995-bh8d5" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.979077 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53499ab2-7d33-4eb2-88da-fc49dc29009f-dns-svc\") pod \"dnsmasq-dns-8554648995-bh8d5\" (UID: \"53499ab2-7d33-4eb2-88da-fc49dc29009f\") " pod="openstack/dnsmasq-dns-8554648995-bh8d5" Jan 29 11:44:47 crc kubenswrapper[4766]: I0129 11:44:47.979121 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53499ab2-7d33-4eb2-88da-fc49dc29009f-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-bh8d5\" (UID: \"53499ab2-7d33-4eb2-88da-fc49dc29009f\") " pod="openstack/dnsmasq-dns-8554648995-bh8d5" Jan 29 11:44:48 crc kubenswrapper[4766]: I0129 11:44:48.061784 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-6jx2g" Jan 29 11:44:48 crc kubenswrapper[4766]: I0129 11:44:48.081106 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53499ab2-7d33-4eb2-88da-fc49dc29009f-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-bh8d5\" (UID: \"53499ab2-7d33-4eb2-88da-fc49dc29009f\") " pod="openstack/dnsmasq-dns-8554648995-bh8d5" Jan 29 11:44:48 crc kubenswrapper[4766]: I0129 11:44:48.081202 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53499ab2-7d33-4eb2-88da-fc49dc29009f-config\") pod \"dnsmasq-dns-8554648995-bh8d5\" (UID: \"53499ab2-7d33-4eb2-88da-fc49dc29009f\") " pod="openstack/dnsmasq-dns-8554648995-bh8d5" Jan 29 11:44:48 crc kubenswrapper[4766]: I0129 11:44:48.081243 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwfbr\" (UniqueName: \"kubernetes.io/projected/53499ab2-7d33-4eb2-88da-fc49dc29009f-kube-api-access-dwfbr\") pod \"dnsmasq-dns-8554648995-bh8d5\" (UID: \"53499ab2-7d33-4eb2-88da-fc49dc29009f\") " pod="openstack/dnsmasq-dns-8554648995-bh8d5" Jan 29 11:44:48 crc kubenswrapper[4766]: I0129 11:44:48.081282 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53499ab2-7d33-4eb2-88da-fc49dc29009f-dns-svc\") pod \"dnsmasq-dns-8554648995-bh8d5\" (UID: \"53499ab2-7d33-4eb2-88da-fc49dc29009f\") " pod="openstack/dnsmasq-dns-8554648995-bh8d5" Jan 29 11:44:48 crc kubenswrapper[4766]: I0129 11:44:48.081307 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53499ab2-7d33-4eb2-88da-fc49dc29009f-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-bh8d5\" (UID: \"53499ab2-7d33-4eb2-88da-fc49dc29009f\") " pod="openstack/dnsmasq-dns-8554648995-bh8d5" Jan 29 11:44:48 crc kubenswrapper[4766]: I0129 11:44:48.082222 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53499ab2-7d33-4eb2-88da-fc49dc29009f-config\") pod \"dnsmasq-dns-8554648995-bh8d5\" (UID: \"53499ab2-7d33-4eb2-88da-fc49dc29009f\") " pod="openstack/dnsmasq-dns-8554648995-bh8d5" Jan 29 11:44:48 crc kubenswrapper[4766]: I0129 11:44:48.082442 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53499ab2-7d33-4eb2-88da-fc49dc29009f-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-bh8d5\" (UID: \"53499ab2-7d33-4eb2-88da-fc49dc29009f\") " pod="openstack/dnsmasq-dns-8554648995-bh8d5" Jan 29 11:44:48 crc kubenswrapper[4766]: I0129 11:44:48.082943 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53499ab2-7d33-4eb2-88da-fc49dc29009f-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-bh8d5\" (UID: \"53499ab2-7d33-4eb2-88da-fc49dc29009f\") " pod="openstack/dnsmasq-dns-8554648995-bh8d5" Jan 29 11:44:48 crc kubenswrapper[4766]: I0129 11:44:48.084464 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53499ab2-7d33-4eb2-88da-fc49dc29009f-dns-svc\") pod \"dnsmasq-dns-8554648995-bh8d5\" (UID: \"53499ab2-7d33-4eb2-88da-fc49dc29009f\") " pod="openstack/dnsmasq-dns-8554648995-bh8d5" Jan 29 11:44:48 crc kubenswrapper[4766]: I0129 
11:44:48.101660 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwfbr\" (UniqueName: \"kubernetes.io/projected/53499ab2-7d33-4eb2-88da-fc49dc29009f-kube-api-access-dwfbr\") pod \"dnsmasq-dns-8554648995-bh8d5\" (UID: \"53499ab2-7d33-4eb2-88da-fc49dc29009f\") " pod="openstack/dnsmasq-dns-8554648995-bh8d5" Jan 29 11:44:48 crc kubenswrapper[4766]: I0129 11:44:48.168883 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-bh8d5" Jan 29 11:44:48 crc kubenswrapper[4766]: I0129 11:44:48.228454 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc8479f9-njcbw" podUID="a3efa3a7-d212-4ae0-8f0a-47b25153393b" containerName="dnsmasq-dns" containerID="cri-o://2f07a103571fa28698cfe31214237075edcd2173d830bacd5b3be17e47817d3d" gracePeriod=10 Jan 29 11:44:48 crc kubenswrapper[4766]: I0129 11:44:48.228653 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-fzpsl" podUID="02f91ca7-9f21-4f64-97ca-3d670aa1e439" containerName="dnsmasq-dns" containerID="cri-o://8d645c2b38d3968609f27f6179521ca700df90dc534bad66cd33d0aa893cf82a" gracePeriod=10 Jan 29 11:44:49 crc kubenswrapper[4766]: I0129 11:44:49.251642 4766 generic.go:334] "Generic (PLEG): container finished" podID="a3efa3a7-d212-4ae0-8f0a-47b25153393b" containerID="2f07a103571fa28698cfe31214237075edcd2173d830bacd5b3be17e47817d3d" exitCode=0 Jan 29 11:44:49 crc kubenswrapper[4766]: I0129 11:44:49.251733 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-njcbw" event={"ID":"a3efa3a7-d212-4ae0-8f0a-47b25153393b","Type":"ContainerDied","Data":"2f07a103571fa28698cfe31214237075edcd2173d830bacd5b3be17e47817d3d"} Jan 29 11:44:49 crc kubenswrapper[4766]: I0129 11:44:49.253793 4766 generic.go:334] "Generic (PLEG): container finished" podID="02f91ca7-9f21-4f64-97ca-3d670aa1e439" containerID="8d645c2b38d3968609f27f6179521ca700df90dc534bad66cd33d0aa893cf82a" exitCode=0 Jan 29 11:44:49 crc kubenswrapper[4766]: I0129 11:44:49.253831 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-fzpsl" event={"ID":"02f91ca7-9f21-4f64-97ca-3d670aa1e439","Type":"ContainerDied","Data":"8d645c2b38d3968609f27f6179521ca700df90dc534bad66cd33d0aa893cf82a"} Jan 29 11:44:50 crc kubenswrapper[4766]: I0129 11:44:50.673883 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-fzpsl" podUID="02f91ca7-9f21-4f64-97ca-3d670aa1e439" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.94:5353: connect: connection refused" Jan 29 11:44:52 crc kubenswrapper[4766]: I0129 11:44:52.513399 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-njcbw" Jan 29 11:44:52 crc kubenswrapper[4766]: I0129 11:44:52.564624 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3efa3a7-d212-4ae0-8f0a-47b25153393b-dns-svc\") pod \"a3efa3a7-d212-4ae0-8f0a-47b25153393b\" (UID: \"a3efa3a7-d212-4ae0-8f0a-47b25153393b\") " Jan 29 11:44:52 crc kubenswrapper[4766]: I0129 11:44:52.565133 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxtfs\" (UniqueName: \"kubernetes.io/projected/a3efa3a7-d212-4ae0-8f0a-47b25153393b-kube-api-access-bxtfs\") pod \"a3efa3a7-d212-4ae0-8f0a-47b25153393b\" (UID: \"a3efa3a7-d212-4ae0-8f0a-47b25153393b\") " Jan 29 11:44:52 crc kubenswrapper[4766]: I0129 11:44:52.565281 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3efa3a7-d212-4ae0-8f0a-47b25153393b-config\") pod \"a3efa3a7-d212-4ae0-8f0a-47b25153393b\" (UID: \"a3efa3a7-d212-4ae0-8f0a-47b25153393b\") " Jan 29 11:44:52 crc kubenswrapper[4766]: I0129 11:44:52.570021 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3efa3a7-d212-4ae0-8f0a-47b25153393b-kube-api-access-bxtfs" (OuterVolumeSpecName: "kube-api-access-bxtfs") pod "a3efa3a7-d212-4ae0-8f0a-47b25153393b" (UID: "a3efa3a7-d212-4ae0-8f0a-47b25153393b"). InnerVolumeSpecName "kube-api-access-bxtfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:44:52 crc kubenswrapper[4766]: I0129 11:44:52.611672 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3efa3a7-d212-4ae0-8f0a-47b25153393b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a3efa3a7-d212-4ae0-8f0a-47b25153393b" (UID: "a3efa3a7-d212-4ae0-8f0a-47b25153393b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:44:52 crc kubenswrapper[4766]: I0129 11:44:52.625717 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3efa3a7-d212-4ae0-8f0a-47b25153393b-config" (OuterVolumeSpecName: "config") pod "a3efa3a7-d212-4ae0-8f0a-47b25153393b" (UID: "a3efa3a7-d212-4ae0-8f0a-47b25153393b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:44:52 crc kubenswrapper[4766]: I0129 11:44:52.668024 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxtfs\" (UniqueName: \"kubernetes.io/projected/a3efa3a7-d212-4ae0-8f0a-47b25153393b-kube-api-access-bxtfs\") on node \"crc\" DevicePath \"\"" Jan 29 11:44:52 crc kubenswrapper[4766]: I0129 11:44:52.668066 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3efa3a7-d212-4ae0-8f0a-47b25153393b-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:44:52 crc kubenswrapper[4766]: I0129 11:44:52.668075 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3efa3a7-d212-4ae0-8f0a-47b25153393b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:44:52 crc kubenswrapper[4766]: I0129 11:44:52.860436 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-fzpsl" Jan 29 11:44:52 crc kubenswrapper[4766]: I0129 11:44:52.971997 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtnbs\" (UniqueName: \"kubernetes.io/projected/02f91ca7-9f21-4f64-97ca-3d670aa1e439-kube-api-access-gtnbs\") pod \"02f91ca7-9f21-4f64-97ca-3d670aa1e439\" (UID: \"02f91ca7-9f21-4f64-97ca-3d670aa1e439\") " Jan 29 11:44:52 crc kubenswrapper[4766]: I0129 11:44:52.972127 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02f91ca7-9f21-4f64-97ca-3d670aa1e439-dns-svc\") pod \"02f91ca7-9f21-4f64-97ca-3d670aa1e439\" (UID: \"02f91ca7-9f21-4f64-97ca-3d670aa1e439\") " Jan 29 11:44:52 crc kubenswrapper[4766]: I0129 11:44:52.972239 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02f91ca7-9f21-4f64-97ca-3d670aa1e439-config\") pod \"02f91ca7-9f21-4f64-97ca-3d670aa1e439\" (UID: \"02f91ca7-9f21-4f64-97ca-3d670aa1e439\") " Jan 29 11:44:52 crc kubenswrapper[4766]: I0129 11:44:52.975494 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02f91ca7-9f21-4f64-97ca-3d670aa1e439-kube-api-access-gtnbs" (OuterVolumeSpecName: "kube-api-access-gtnbs") pod "02f91ca7-9f21-4f64-97ca-3d670aa1e439" (UID: "02f91ca7-9f21-4f64-97ca-3d670aa1e439"). InnerVolumeSpecName "kube-api-access-gtnbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:44:53 crc kubenswrapper[4766]: I0129 11:44:53.006603 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02f91ca7-9f21-4f64-97ca-3d670aa1e439-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "02f91ca7-9f21-4f64-97ca-3d670aa1e439" (UID: "02f91ca7-9f21-4f64-97ca-3d670aa1e439"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:44:53 crc kubenswrapper[4766]: I0129 11:44:53.007550 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02f91ca7-9f21-4f64-97ca-3d670aa1e439-config" (OuterVolumeSpecName: "config") pod "02f91ca7-9f21-4f64-97ca-3d670aa1e439" (UID: "02f91ca7-9f21-4f64-97ca-3d670aa1e439"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:44:53 crc kubenswrapper[4766]: I0129 11:44:53.073984 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtnbs\" (UniqueName: \"kubernetes.io/projected/02f91ca7-9f21-4f64-97ca-3d670aa1e439-kube-api-access-gtnbs\") on node \"crc\" DevicePath \"\"" Jan 29 11:44:53 crc kubenswrapper[4766]: I0129 11:44:53.074021 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02f91ca7-9f21-4f64-97ca-3d670aa1e439-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:44:53 crc kubenswrapper[4766]: I0129 11:44:53.074031 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02f91ca7-9f21-4f64-97ca-3d670aa1e439-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:44:53 crc kubenswrapper[4766]: I0129 11:44:53.285154 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-fzpsl" event={"ID":"02f91ca7-9f21-4f64-97ca-3d670aa1e439","Type":"ContainerDied","Data":"a275c8d368fb555a7d689a88372240ca819df9a9d350ecfa837534c66dba17ff"} Jan 29 11:44:53 crc kubenswrapper[4766]: I0129 11:44:53.285227 4766 scope.go:117] "RemoveContainer" containerID="8d645c2b38d3968609f27f6179521ca700df90dc534bad66cd33d0aa893cf82a" Jan 29 11:44:53 crc kubenswrapper[4766]: I0129 11:44:53.285814 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-fzpsl" Jan 29 11:44:53 crc kubenswrapper[4766]: I0129 11:44:53.287737 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-njcbw" event={"ID":"a3efa3a7-d212-4ae0-8f0a-47b25153393b","Type":"ContainerDied","Data":"f26424cac0e822ae30593b2e607e1e2cda3566c61df40911cc87db55f9df1905"} Jan 29 11:44:53 crc kubenswrapper[4766]: I0129 11:44:53.287793 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-njcbw" Jan 29 11:44:53 crc kubenswrapper[4766]: I0129 11:44:53.321712 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-njcbw"] Jan 29 11:44:53 crc kubenswrapper[4766]: E0129 11:44:53.325402 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02f91ca7_9f21_4f64_97ca_3d670aa1e439.slice/crio-a275c8d368fb555a7d689a88372240ca819df9a9d350ecfa837534c66dba17ff\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3efa3a7_d212_4ae0_8f0a_47b25153393b.slice\": RecentStats: unable to find data in memory cache]" Jan 29 11:44:53 crc kubenswrapper[4766]: I0129 11:44:53.332469 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-njcbw"] Jan 29 11:44:53 crc kubenswrapper[4766]: I0129 11:44:53.338868 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fzpsl"] Jan 29 11:44:53 crc kubenswrapper[4766]: I0129 11:44:53.344626 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fzpsl"] Jan 29 11:44:53 crc kubenswrapper[4766]: I0129 11:44:53.488618 4766 scope.go:117] "RemoveContainer" containerID="ccb294817b1b2e24b0fb96cb398a8e43dd625150fa3f6ddb01683dd087ae0e29" Jan 29 11:44:53 crc kubenswrapper[4766]: I0129 11:44:53.776388 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-bh8d5"] Jan 29 11:44:53 crc kubenswrapper[4766]: I0129 11:44:53.904742 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-6jx2g"] Jan 29 11:44:53 crc kubenswrapper[4766]: I0129 11:44:53.979682 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-pmh5k"] Jan 29 11:44:54 crc kubenswrapper[4766]: I0129 11:44:54.355428 4766 scope.go:117] "RemoveContainer" containerID="2f07a103571fa28698cfe31214237075edcd2173d830bacd5b3be17e47817d3d" Jan 29 11:44:54 crc kubenswrapper[4766]: W0129 11:44:54.373808 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode45b3f77_28fa_4188_b58c_b50cebb7fed6.slice/crio-a2a3e87bee12f8489c67cf49b9d57c4420a7256f2e686d11cff890966849d89c WatchSource:0}: Error finding container a2a3e87bee12f8489c67cf49b9d57c4420a7256f2e686d11cff890966849d89c: Status 404 returned error can't find the container with id a2a3e87bee12f8489c67cf49b9d57c4420a7256f2e686d11cff890966849d89c Jan 29 11:44:54 crc kubenswrapper[4766]: W0129 11:44:54.374511 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddefb6fef_3db5_4137_a250_9e20054fe48a.slice/crio-d910eaa24b5ea235f1db7998854c8bae53c27e04c7367a34c8b3098f58423927 WatchSource:0}: Error finding container d910eaa24b5ea235f1db7998854c8bae53c27e04c7367a34c8b3098f58423927: Status 404 returned error can't find the container with id d910eaa24b5ea235f1db7998854c8bae53c27e04c7367a34c8b3098f58423927 Jan 29 11:44:54 crc kubenswrapper[4766]: I0129 11:44:54.476756 4766 scope.go:117] "RemoveContainer" containerID="a5ebc7120991976ef12d27b6c5118484e479e63351d1d238e18d542010658411" Jan 29 11:44:55 crc kubenswrapper[4766]: I0129 11:44:55.240058 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="02f91ca7-9f21-4f64-97ca-3d670aa1e439" path="/var/lib/kubelet/pods/02f91ca7-9f21-4f64-97ca-3d670aa1e439/volumes" Jan 29 11:44:55 crc kubenswrapper[4766]: I0129 11:44:55.241441 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3efa3a7-d212-4ae0-8f0a-47b25153393b" path="/var/lib/kubelet/pods/a3efa3a7-d212-4ae0-8f0a-47b25153393b/volumes" Jan 29 11:44:55 crc kubenswrapper[4766]: I0129 11:44:55.322172 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-pmh5k" event={"ID":"defb6fef-3db5-4137-a250-9e20054fe48a","Type":"ContainerStarted","Data":"d910eaa24b5ea235f1db7998854c8bae53c27e04c7367a34c8b3098f58423927"} Jan 29 11:44:55 crc kubenswrapper[4766]: I0129 11:44:55.324319 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4f673618-4b7d-47e5-84af-092c995bca8e","Type":"ContainerStarted","Data":"4c443a4729e49ef410d8e51035b7cd68c76673e84fb0a8e8d6348e3b020266ad"} Jan 29 11:44:55 crc kubenswrapper[4766]: I0129 11:44:55.327261 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-6jx2g" event={"ID":"e45b3f77-28fa-4188-b58c-b50cebb7fed6","Type":"ContainerStarted","Data":"a2a3e87bee12f8489c67cf49b9d57c4420a7256f2e686d11cff890966849d89c"} Jan 29 11:44:55 crc kubenswrapper[4766]: I0129 11:44:55.329567 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5kz4c" event={"ID":"73cf0e15-caab-4cea-94b5-7470d635d767","Type":"ContainerStarted","Data":"36c3a3ca13981a2584b6cdf28bc3d3bcfe78cd3a54aa84f86d93f915ae3c8201"} Jan 29 11:44:55 crc kubenswrapper[4766]: I0129 11:44:55.330028 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-5kz4c" Jan 29 11:44:55 crc kubenswrapper[4766]: I0129 11:44:55.331715 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ea239fdb-85e2-48e6-b992-42bd9f7e66c8","Type":"ContainerStarted","Data":"660cba457ee644f9f74e7f4d4669bb71ee8bdd88f5e73291cc114e7814b6fa5b"} Jan 29 11:44:55 crc kubenswrapper[4766]: I0129 11:44:55.333339 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-bh8d5" event={"ID":"53499ab2-7d33-4eb2-88da-fc49dc29009f","Type":"ContainerStarted","Data":"9b0c5cdc295abc498938c1ed4d9ce2bb72b47dd7fbc7ca82ebc0cac3c60d3e21"} Jan 29 11:44:55 crc kubenswrapper[4766]: I0129 11:44:55.375506 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5ccc8479f9-njcbw" podUID="a3efa3a7-d212-4ae0-8f0a-47b25153393b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.93:5353: i/o timeout" Jan 29 11:44:55 crc kubenswrapper[4766]: I0129 11:44:55.399665 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-5kz4c" podStartSLOduration=6.484054761 podStartE2EDuration="15.399647464s" podCreationTimestamp="2026-01-29 11:44:40 +0000 UTC" firstStartedPulling="2026-01-29 11:44:44.862586122 +0000 UTC m=+1421.974979133" lastFinishedPulling="2026-01-29 11:44:53.778178825 +0000 UTC m=+1430.890571836" observedRunningTime="2026-01-29 11:44:55.391891719 +0000 UTC m=+1432.504284730" watchObservedRunningTime="2026-01-29 11:44:55.399647464 +0000 UTC m=+1432.512040475" Jan 29 11:44:56 crc kubenswrapper[4766]: I0129 11:44:56.342353 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"ace2f6ec-cf57-4742-82e9-e13fd230bb69","Type":"ContainerStarted","Data":"35d741477652fd2fdab85e5a190f27cf16637cca6d3186932dfe4f9ff8c8c1c1"} Jan 29 11:44:56 crc kubenswrapper[4766]: I0129 11:44:56.345070 4766 generic.go:334] "Generic (PLEG): container finished" podID="e45b3f77-28fa-4188-b58c-b50cebb7fed6" containerID="fe6564bf18b28c2f59cd28247181d3e0371caa4452b0fe6b1e974e25aee74d55" exitCode=0 Jan 29 11:44:56 crc kubenswrapper[4766]: I0129 11:44:56.345332 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-6jx2g" event={"ID":"e45b3f77-28fa-4188-b58c-b50cebb7fed6","Type":"ContainerDied","Data":"fe6564bf18b28c2f59cd28247181d3e0371caa4452b0fe6b1e974e25aee74d55"} Jan 29 11:44:56 crc kubenswrapper[4766]: I0129 11:44:56.354111 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b77b577e-b980-46fb-945a-a0b57e3bdc17","Type":"ContainerStarted","Data":"a7bd65c4cb6402ca31a9d412ea5ab09924e3681dbdd63afcca07deade4b71a0b"} Jan 29 11:44:56 crc kubenswrapper[4766]: I0129 11:44:56.355595 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"51f2b06e-748d-4bb1-b7e7-f5cd039a532d","Type":"ContainerStarted","Data":"8f06668fc700d7443d44bf9dde78d755fef11d0abb78bc9734f13c5f3c751e31"} Jan 29 11:44:56 crc kubenswrapper[4766]: I0129 11:44:56.357614 4766 generic.go:334] "Generic (PLEG): container finished" podID="53499ab2-7d33-4eb2-88da-fc49dc29009f" containerID="f39ad743cd1d52de22fe04c6c34b0317eb9e3e1b12ad8d3ef10405eb433afa64" exitCode=0 Jan 29 11:44:56 crc kubenswrapper[4766]: I0129 11:44:56.357681 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-bh8d5" event={"ID":"53499ab2-7d33-4eb2-88da-fc49dc29009f","Type":"ContainerDied","Data":"f39ad743cd1d52de22fe04c6c34b0317eb9e3e1b12ad8d3ef10405eb433afa64"} Jan 29 11:44:56 crc kubenswrapper[4766]: I0129 11:44:56.359543 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d00d673d-aea5-4014-8e2b-bcb78afb7606","Type":"ContainerStarted","Data":"f90c3671694662e6b9f1584abc9bd6ae5dd46f25e77b8df0cd377c69033dc174"} Jan 29 11:44:56 crc kubenswrapper[4766]: I0129 11:44:56.360035 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 29 11:44:56 crc kubenswrapper[4766]: I0129 11:44:56.367922 4766 generic.go:334] "Generic (PLEG): container finished" podID="be830961-a6c3-4340-a134-ea20de96b31b" containerID="e76508553331ee93028854abc43e1fdbfc214e061ac1372339c39e9dd3e3651f" exitCode=0 Jan 29 11:44:56 crc kubenswrapper[4766]: I0129 11:44:56.368136 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2gh2n" event={"ID":"be830961-a6c3-4340-a134-ea20de96b31b","Type":"ContainerDied","Data":"e76508553331ee93028854abc43e1fdbfc214e061ac1372339c39e9dd3e3651f"} Jan 29 11:44:56 crc kubenswrapper[4766]: I0129 11:44:56.379874 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c10a3d13-c16f-41fa-83ac-c3454b7ed6c4","Type":"ContainerStarted","Data":"b0a884b3bf6f2c44a280cd0feb5b6a5eea03af1a7f5108f2b66b05a812df65b2"} Jan 29 11:44:56 crc kubenswrapper[4766]: I0129 11:44:56.380635 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 29 11:44:56 crc kubenswrapper[4766]: I0129 11:44:56.383246 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"c961d826-8e7c-45cf-afa0-a1712a3def4f","Type":"ContainerStarted","Data":"aa69d365dd52beaeca5420f2ec0d4a643b3863f2b22c8b2c4958a5c03855b17f"} Jan 29 11:44:56 crc kubenswrapper[4766]: I0129 11:44:56.531659 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=13.929834278 podStartE2EDuration="22.531638698s" podCreationTimestamp="2026-01-29 11:44:34 +0000 UTC" firstStartedPulling="2026-01-29 11:44:44.863833117 +0000 UTC m=+1421.976226128" lastFinishedPulling="2026-01-29 11:44:53.465637537 +0000 UTC m=+1430.578030548" observedRunningTime="2026-01-29 11:44:56.502781008 +0000 UTC m=+1433.615174029" watchObservedRunningTime="2026-01-29 11:44:56.531638698 +0000 UTC m=+1433.644031709" Jan 29 11:44:56 crc kubenswrapper[4766]: I0129 11:44:56.551248 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=10.688288786 podStartE2EDuration="20.551227442s" podCreationTimestamp="2026-01-29 11:44:36 +0000 UTC" firstStartedPulling="2026-01-29 11:44:44.713788675 +0000 UTC m=+1421.826181686" lastFinishedPulling="2026-01-29 11:44:54.576727331 +0000 UTC m=+1431.689120342" observedRunningTime="2026-01-29 11:44:56.522554126 +0000 UTC m=+1433.634947147" watchObservedRunningTime="2026-01-29 11:44:56.551227442 +0000 UTC m=+1433.663620453" Jan 29 11:44:57 crc kubenswrapper[4766]: I0129 11:44:57.396261 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-pmh5k" event={"ID":"defb6fef-3db5-4137-a250-9e20054fe48a","Type":"ContainerStarted","Data":"a5c7f6df0612c9199b8577076ad9f5fb9025e0093bf6d0900000b45ecd6a38b9"} Jan 29 11:44:57 crc kubenswrapper[4766]: I0129 11:44:57.398508 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-6jx2g" event={"ID":"e45b3f77-28fa-4188-b58c-b50cebb7fed6","Type":"ContainerStarted","Data":"b26cb996f17e745aabd0babff95264727869495ce37b1a7b3616ea49b826f181"} Jan 29 11:44:57 crc kubenswrapper[4766]: I0129 11:44:57.398647 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-6jx2g" Jan 29 11:44:57 crc kubenswrapper[4766]: I0129 11:44:57.401783 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2gh2n" event={"ID":"be830961-a6c3-4340-a134-ea20de96b31b","Type":"ContainerStarted","Data":"6d0c73be724cc09499410e85d8a2850f80580b59a49608c7346ae0c91c515cca"} Jan 29 11:44:57 crc kubenswrapper[4766]: I0129 11:44:57.401824 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2gh2n" event={"ID":"be830961-a6c3-4340-a134-ea20de96b31b","Type":"ContainerStarted","Data":"ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2"} Jan 29 11:44:57 crc kubenswrapper[4766]: I0129 11:44:57.401915 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-2gh2n" Jan 29 11:44:57 crc kubenswrapper[4766]: I0129 11:44:57.404016 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"51f2b06e-748d-4bb1-b7e7-f5cd039a532d","Type":"ContainerStarted","Data":"244347370c5e70ae119750458317e88ba78b6c9c01068f9f4942f415f38e3b6c"} Jan 29 11:44:57 crc kubenswrapper[4766]: I0129 11:44:57.407557 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"c961d826-8e7c-45cf-afa0-a1712a3def4f","Type":"ContainerStarted","Data":"67a037dad4e638172b7099712b789cf884049f6cc0a4510c0636f1f4a13a2e4a"} Jan 29 11:44:57 crc kubenswrapper[4766]: I0129 11:44:57.414594 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-bh8d5" event={"ID":"53499ab2-7d33-4eb2-88da-fc49dc29009f","Type":"ContainerStarted","Data":"1aa5c5556afd2aa58d9cdfcd1d516e610002c2c03f8bb37e88eccd96a502dfc5"} Jan 29 11:44:57 crc kubenswrapper[4766]: I0129 11:44:57.414641 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-bh8d5" Jan 29 11:44:57 crc kubenswrapper[4766]: I0129 11:44:57.415950 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-pmh5k" podStartSLOduration=8.592475411 podStartE2EDuration="10.415937483s" podCreationTimestamp="2026-01-29 11:44:47 +0000 UTC" firstStartedPulling="2026-01-29 11:44:54.453516894 +0000 UTC m=+1431.565909905" lastFinishedPulling="2026-01-29 11:44:56.276978966 +0000 UTC m=+1433.389371977" observedRunningTime="2026-01-29 11:44:57.415029948 +0000 UTC m=+1434.527422959" watchObservedRunningTime="2026-01-29 11:44:57.415937483 +0000 UTC m=+1434.528330524" Jan 29 11:44:57 crc kubenswrapper[4766]: I0129 11:44:57.450393 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.086422243 podStartE2EDuration="15.450375168s" podCreationTimestamp="2026-01-29 11:44:42 +0000 UTC" firstStartedPulling="2026-01-29 11:44:44.897099839 +0000 UTC m=+1422.009492850" lastFinishedPulling="2026-01-29 11:44:56.261052764 +0000 UTC m=+1433.373445775" observedRunningTime="2026-01-29 11:44:57.445328958 +0000 UTC m=+1434.557721979" watchObservedRunningTime="2026-01-29 11:44:57.450375168 +0000 UTC m=+1434.562768179" Jan 29 11:44:57 crc kubenswrapper[4766]: I0129 11:44:57.473029 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-6jx2g" podStartSLOduration=10.473011396 podStartE2EDuration="10.473011396s" podCreationTimestamp="2026-01-29 11:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:44:57.468595454 +0000 UTC m=+1434.580988465" watchObservedRunningTime="2026-01-29 11:44:57.473011396 +0000 UTC m=+1434.585404397" Jan 29 11:44:57 crc kubenswrapper[4766]: I0129 11:44:57.498193 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-2gh2n" podStartSLOduration=9.654127709 podStartE2EDuration="17.498175994s" podCreationTimestamp="2026-01-29 11:44:40 +0000 UTC" firstStartedPulling="2026-01-29 11:44:44.947779305 +0000 UTC m=+1422.060172316" lastFinishedPulling="2026-01-29 11:44:52.79182759 +0000 UTC m=+1429.904220601" observedRunningTime="2026-01-29 11:44:57.489829573 +0000 UTC m=+1434.602222594" watchObservedRunningTime="2026-01-29 11:44:57.498175994 +0000 UTC m=+1434.610569005" Jan 29 11:44:57 crc kubenswrapper[4766]: I0129 11:44:57.516821 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=8.292014439999999 podStartE2EDuration="18.516804391s" podCreationTimestamp="2026-01-29 11:44:39 +0000 UTC" firstStartedPulling="2026-01-29 11:44:46.050680223 +0000 UTC m=+1423.163073234" lastFinishedPulling="2026-01-29 11:44:56.275470174 +0000 UTC m=+1433.387863185" 
observedRunningTime="2026-01-29 11:44:57.511150824 +0000 UTC m=+1434.623543835" watchObservedRunningTime="2026-01-29 11:44:57.516804391 +0000 UTC m=+1434.629197402" Jan 29 11:44:57 crc kubenswrapper[4766]: I0129 11:44:57.542477 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-bh8d5" podStartSLOduration=10.542458272 podStartE2EDuration="10.542458272s" podCreationTimestamp="2026-01-29 11:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:44:57.537489564 +0000 UTC m=+1434.649882575" watchObservedRunningTime="2026-01-29 11:44:57.542458272 +0000 UTC m=+1434.654851293" Jan 29 11:44:58 crc kubenswrapper[4766]: I0129 11:44:58.418818 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-2gh2n" Jan 29 11:44:58 crc kubenswrapper[4766]: I0129 11:44:58.684596 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:58 crc kubenswrapper[4766]: I0129 11:44:58.684647 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:58 crc kubenswrapper[4766]: I0129 11:44:58.724820 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 29 11:44:59 crc kubenswrapper[4766]: I0129 11:44:59.036549 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 29 11:44:59 crc kubenswrapper[4766]: I0129 11:44:59.076610 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 29 11:44:59 crc kubenswrapper[4766]: I0129 11:44:59.424940 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.134556 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494785-q7tw5"] Jan 29 11:45:00 crc kubenswrapper[4766]: E0129 11:45:00.134957 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3efa3a7-d212-4ae0-8f0a-47b25153393b" containerName="dnsmasq-dns" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.134984 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3efa3a7-d212-4ae0-8f0a-47b25153393b" containerName="dnsmasq-dns" Jan 29 11:45:00 crc kubenswrapper[4766]: E0129 11:45:00.135028 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f91ca7-9f21-4f64-97ca-3d670aa1e439" containerName="dnsmasq-dns" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.135038 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f91ca7-9f21-4f64-97ca-3d670aa1e439" containerName="dnsmasq-dns" Jan 29 11:45:00 crc kubenswrapper[4766]: E0129 11:45:00.135054 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3efa3a7-d212-4ae0-8f0a-47b25153393b" containerName="init" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.135061 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3efa3a7-d212-4ae0-8f0a-47b25153393b" containerName="init" Jan 29 11:45:00 crc kubenswrapper[4766]: E0129 11:45:00.135091 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f91ca7-9f21-4f64-97ca-3d670aa1e439" containerName="init" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.135100 4766 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="02f91ca7-9f21-4f64-97ca-3d670aa1e439" containerName="init" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.135286 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3efa3a7-d212-4ae0-8f0a-47b25153393b" containerName="dnsmasq-dns" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.135313 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="02f91ca7-9f21-4f64-97ca-3d670aa1e439" containerName="dnsmasq-dns" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.135958 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-q7tw5" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.138874 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.144098 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.148914 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494785-q7tw5"] Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.205350 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f6g7\" (UniqueName: \"kubernetes.io/projected/21ea8759-855d-46d1-84d4-4e96bd6efaa3-kube-api-access-5f6g7\") pod \"collect-profiles-29494785-q7tw5\" (UID: \"21ea8759-855d-46d1-84d4-4e96bd6efaa3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-q7tw5" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.205690 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21ea8759-855d-46d1-84d4-4e96bd6efaa3-config-volume\") pod \"collect-profiles-29494785-q7tw5\" (UID: \"21ea8759-855d-46d1-84d4-4e96bd6efaa3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-q7tw5" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.205838 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/21ea8759-855d-46d1-84d4-4e96bd6efaa3-secret-volume\") pod \"collect-profiles-29494785-q7tw5\" (UID: \"21ea8759-855d-46d1-84d4-4e96bd6efaa3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-q7tw5" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.314630 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f6g7\" (UniqueName: \"kubernetes.io/projected/21ea8759-855d-46d1-84d4-4e96bd6efaa3-kube-api-access-5f6g7\") pod \"collect-profiles-29494785-q7tw5\" (UID: \"21ea8759-855d-46d1-84d4-4e96bd6efaa3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-q7tw5" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.314803 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21ea8759-855d-46d1-84d4-4e96bd6efaa3-config-volume\") pod \"collect-profiles-29494785-q7tw5\" (UID: \"21ea8759-855d-46d1-84d4-4e96bd6efaa3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-q7tw5" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.314856 
4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/21ea8759-855d-46d1-84d4-4e96bd6efaa3-secret-volume\") pod \"collect-profiles-29494785-q7tw5\" (UID: \"21ea8759-855d-46d1-84d4-4e96bd6efaa3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-q7tw5" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.316318 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21ea8759-855d-46d1-84d4-4e96bd6efaa3-config-volume\") pod \"collect-profiles-29494785-q7tw5\" (UID: \"21ea8759-855d-46d1-84d4-4e96bd6efaa3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-q7tw5" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.325980 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/21ea8759-855d-46d1-84d4-4e96bd6efaa3-secret-volume\") pod \"collect-profiles-29494785-q7tw5\" (UID: \"21ea8759-855d-46d1-84d4-4e96bd6efaa3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-q7tw5" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.341835 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f6g7\" (UniqueName: \"kubernetes.io/projected/21ea8759-855d-46d1-84d4-4e96bd6efaa3-kube-api-access-5f6g7\") pod \"collect-profiles-29494785-q7tw5\" (UID: \"21ea8759-855d-46d1-84d4-4e96bd6efaa3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-q7tw5" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.433238 4766 generic.go:334] "Generic (PLEG): container finished" podID="4f673618-4b7d-47e5-84af-092c995bca8e" containerID="4c443a4729e49ef410d8e51035b7cd68c76673e84fb0a8e8d6348e3b020266ad" exitCode=0 Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.433326 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4f673618-4b7d-47e5-84af-092c995bca8e","Type":"ContainerDied","Data":"4c443a4729e49ef410d8e51035b7cd68c76673e84fb0a8e8d6348e3b020266ad"} Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.436190 4766 generic.go:334] "Generic (PLEG): container finished" podID="ea239fdb-85e2-48e6-b992-42bd9f7e66c8" containerID="660cba457ee644f9f74e7f4d4669bb71ee8bdd88f5e73291cc114e7814b6fa5b" exitCode=0 Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.436261 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ea239fdb-85e2-48e6-b992-42bd9f7e66c8","Type":"ContainerDied","Data":"660cba457ee644f9f74e7f4d4669bb71ee8bdd88f5e73291cc114e7814b6fa5b"} Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.461558 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-q7tw5" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.488294 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.488909 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.770266 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.775780 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.781959 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.786333 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.786618 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.786968 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-rr4bd" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.787200 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.823104 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9453e394-ed9c-4d36-b200-e559e620a7f7-scripts\") pod \"ovn-northd-0\" (UID: \"9453e394-ed9c-4d36-b200-e559e620a7f7\") " pod="openstack/ovn-northd-0" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.823150 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9453e394-ed9c-4d36-b200-e559e620a7f7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9453e394-ed9c-4d36-b200-e559e620a7f7\") " pod="openstack/ovn-northd-0" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.823210 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9453e394-ed9c-4d36-b200-e559e620a7f7-config\") pod \"ovn-northd-0\" (UID: \"9453e394-ed9c-4d36-b200-e559e620a7f7\") " pod="openstack/ovn-northd-0" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.823227 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9453e394-ed9c-4d36-b200-e559e620a7f7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9453e394-ed9c-4d36-b200-e559e620a7f7\") " pod="openstack/ovn-northd-0" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.823267 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9453e394-ed9c-4d36-b200-e559e620a7f7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9453e394-ed9c-4d36-b200-e559e620a7f7\") " pod="openstack/ovn-northd-0" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.823292 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvrrd\" (UniqueName: \"kubernetes.io/projected/9453e394-ed9c-4d36-b200-e559e620a7f7-kube-api-access-vvrrd\") pod \"ovn-northd-0\" (UID: \"9453e394-ed9c-4d36-b200-e559e620a7f7\") " pod="openstack/ovn-northd-0" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.823353 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9453e394-ed9c-4d36-b200-e559e620a7f7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9453e394-ed9c-4d36-b200-e559e620a7f7\") " pod="openstack/ovn-northd-0" Jan 29 11:45:00 crc kubenswrapper[4766]: 
I0129 11:45:00.925576 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9453e394-ed9c-4d36-b200-e559e620a7f7-scripts\") pod \"ovn-northd-0\" (UID: \"9453e394-ed9c-4d36-b200-e559e620a7f7\") " pod="openstack/ovn-northd-0" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.925633 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9453e394-ed9c-4d36-b200-e559e620a7f7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9453e394-ed9c-4d36-b200-e559e620a7f7\") " pod="openstack/ovn-northd-0" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.925682 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9453e394-ed9c-4d36-b200-e559e620a7f7-config\") pod \"ovn-northd-0\" (UID: \"9453e394-ed9c-4d36-b200-e559e620a7f7\") " pod="openstack/ovn-northd-0" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.925706 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9453e394-ed9c-4d36-b200-e559e620a7f7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9453e394-ed9c-4d36-b200-e559e620a7f7\") " pod="openstack/ovn-northd-0" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.925732 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9453e394-ed9c-4d36-b200-e559e620a7f7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9453e394-ed9c-4d36-b200-e559e620a7f7\") " pod="openstack/ovn-northd-0" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.925767 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvrrd\" (UniqueName: \"kubernetes.io/projected/9453e394-ed9c-4d36-b200-e559e620a7f7-kube-api-access-vvrrd\") pod \"ovn-northd-0\" (UID: \"9453e394-ed9c-4d36-b200-e559e620a7f7\") " pod="openstack/ovn-northd-0" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.925793 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9453e394-ed9c-4d36-b200-e559e620a7f7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9453e394-ed9c-4d36-b200-e559e620a7f7\") " pod="openstack/ovn-northd-0" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.926828 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9453e394-ed9c-4d36-b200-e559e620a7f7-config\") pod \"ovn-northd-0\" (UID: \"9453e394-ed9c-4d36-b200-e559e620a7f7\") " pod="openstack/ovn-northd-0" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.926873 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9453e394-ed9c-4d36-b200-e559e620a7f7-scripts\") pod \"ovn-northd-0\" (UID: \"9453e394-ed9c-4d36-b200-e559e620a7f7\") " pod="openstack/ovn-northd-0" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.927244 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9453e394-ed9c-4d36-b200-e559e620a7f7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9453e394-ed9c-4d36-b200-e559e620a7f7\") " pod="openstack/ovn-northd-0" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.931259 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9453e394-ed9c-4d36-b200-e559e620a7f7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9453e394-ed9c-4d36-b200-e559e620a7f7\") " pod="openstack/ovn-northd-0" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.932012 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9453e394-ed9c-4d36-b200-e559e620a7f7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9453e394-ed9c-4d36-b200-e559e620a7f7\") " pod="openstack/ovn-northd-0" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.936100 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9453e394-ed9c-4d36-b200-e559e620a7f7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9453e394-ed9c-4d36-b200-e559e620a7f7\") " pod="openstack/ovn-northd-0" Jan 29 11:45:00 crc kubenswrapper[4766]: I0129 11:45:00.955801 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvrrd\" (UniqueName: \"kubernetes.io/projected/9453e394-ed9c-4d36-b200-e559e620a7f7-kube-api-access-vvrrd\") pod \"ovn-northd-0\" (UID: \"9453e394-ed9c-4d36-b200-e559e620a7f7\") " pod="openstack/ovn-northd-0" Jan 29 11:45:01 crc kubenswrapper[4766]: I0129 11:45:01.096011 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 11:45:03 crc kubenswrapper[4766]: I0129 11:45:03.063652 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf47b49b7-6jx2g" Jan 29 11:45:03 crc kubenswrapper[4766]: I0129 11:45:03.170897 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-bh8d5" Jan 29 11:45:03 crc kubenswrapper[4766]: I0129 11:45:03.222922 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-6jx2g"] Jan 29 11:45:03 crc kubenswrapper[4766]: I0129 11:45:03.476604 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-6jx2g" podUID="e45b3f77-28fa-4188-b58c-b50cebb7fed6" containerName="dnsmasq-dns" containerID="cri-o://b26cb996f17e745aabd0babff95264727869495ce37b1a7b3616ea49b826f181" gracePeriod=10 Jan 29 11:45:03 crc kubenswrapper[4766]: I0129 11:45:03.826268 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 11:45:03 crc kubenswrapper[4766]: W0129 11:45:03.834231 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9453e394_ed9c_4d36_b200_e559e620a7f7.slice/crio-9bf188dd9af5f4187ffaf61b998ad1081a1310d3088ae50894cf97eac20e1e2c WatchSource:0}: Error finding container 9bf188dd9af5f4187ffaf61b998ad1081a1310d3088ae50894cf97eac20e1e2c: Status 404 returned error can't find the container with id 9bf188dd9af5f4187ffaf61b998ad1081a1310d3088ae50894cf97eac20e1e2c Jan 29 11:45:03 crc kubenswrapper[4766]: I0129 11:45:03.905303 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494785-q7tw5"] Jan 29 11:45:04 crc kubenswrapper[4766]: I0129 11:45:04.483608 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-q7tw5" 
event={"ID":"21ea8759-855d-46d1-84d4-4e96bd6efaa3","Type":"ContainerStarted","Data":"317736784fdc517341ed02b194156eb96a5258abe4e64e106c9b58d9ff33c45b"} Jan 29 11:45:04 crc kubenswrapper[4766]: I0129 11:45:04.484783 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9453e394-ed9c-4d36-b200-e559e620a7f7","Type":"ContainerStarted","Data":"9bf188dd9af5f4187ffaf61b998ad1081a1310d3088ae50894cf97eac20e1e2c"} Jan 29 11:45:04 crc kubenswrapper[4766]: I0129 11:45:04.972684 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 29 11:45:05 crc kubenswrapper[4766]: I0129 11:45:05.492550 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4f673618-4b7d-47e5-84af-092c995bca8e","Type":"ContainerStarted","Data":"4d4badd8b305888bf4d052a10979015964644aa496c75333eb425b61d68f5844"} Jan 29 11:45:05 crc kubenswrapper[4766]: I0129 11:45:05.495600 4766 generic.go:334] "Generic (PLEG): container finished" podID="e45b3f77-28fa-4188-b58c-b50cebb7fed6" containerID="b26cb996f17e745aabd0babff95264727869495ce37b1a7b3616ea49b826f181" exitCode=0 Jan 29 11:45:05 crc kubenswrapper[4766]: I0129 11:45:05.495680 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-6jx2g" event={"ID":"e45b3f77-28fa-4188-b58c-b50cebb7fed6","Type":"ContainerDied","Data":"b26cb996f17e745aabd0babff95264727869495ce37b1a7b3616ea49b826f181"} Jan 29 11:45:05 crc kubenswrapper[4766]: I0129 11:45:05.498477 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ea239fdb-85e2-48e6-b992-42bd9f7e66c8","Type":"ContainerStarted","Data":"924c2d970cbd759f4242c3fce696d5d00f5727e764f49338b35fa22e2a1a46c7"} Jan 29 11:45:05 crc kubenswrapper[4766]: I0129 11:45:05.500325 4766 generic.go:334] "Generic (PLEG): container finished" podID="21ea8759-855d-46d1-84d4-4e96bd6efaa3" containerID="686fb13b9862181d381212664fdc4b66f55608241d71909cd4accb8b0f085a9b" exitCode=0 Jan 29 11:45:05 crc kubenswrapper[4766]: I0129 11:45:05.500437 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-q7tw5" event={"ID":"21ea8759-855d-46d1-84d4-4e96bd6efaa3","Type":"ContainerDied","Data":"686fb13b9862181d381212664fdc4b66f55608241d71909cd4accb8b0f085a9b"} Jan 29 11:45:05 crc kubenswrapper[4766]: I0129 11:45:05.527876 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=22.031819492 podStartE2EDuration="32.527855896s" podCreationTimestamp="2026-01-29 11:44:33 +0000 UTC" firstStartedPulling="2026-01-29 11:44:42.797600823 +0000 UTC m=+1419.909993834" lastFinishedPulling="2026-01-29 11:44:53.293637227 +0000 UTC m=+1430.406030238" observedRunningTime="2026-01-29 11:45:05.512046048 +0000 UTC m=+1442.624439049" watchObservedRunningTime="2026-01-29 11:45:05.527855896 +0000 UTC m=+1442.640248907" Jan 29 11:45:05 crc kubenswrapper[4766]: I0129 11:45:05.538087 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=25.525764035 podStartE2EDuration="34.5380662s" podCreationTimestamp="2026-01-29 11:44:31 +0000 UTC" firstStartedPulling="2026-01-29 11:44:44.453903808 +0000 UTC m=+1421.566296819" lastFinishedPulling="2026-01-29 11:44:53.466205963 +0000 UTC m=+1430.578598984" observedRunningTime="2026-01-29 11:45:05.53483203 +0000 UTC m=+1442.647225041" 
watchObservedRunningTime="2026-01-29 11:45:05.5380662 +0000 UTC m=+1442.650459211" Jan 29 11:45:05 crc kubenswrapper[4766]: I0129 11:45:05.918728 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-6jx2g" Jan 29 11:45:05 crc kubenswrapper[4766]: I0129 11:45:05.939801 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhnz7\" (UniqueName: \"kubernetes.io/projected/e45b3f77-28fa-4188-b58c-b50cebb7fed6-kube-api-access-hhnz7\") pod \"e45b3f77-28fa-4188-b58c-b50cebb7fed6\" (UID: \"e45b3f77-28fa-4188-b58c-b50cebb7fed6\") " Jan 29 11:45:05 crc kubenswrapper[4766]: I0129 11:45:05.940016 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e45b3f77-28fa-4188-b58c-b50cebb7fed6-ovsdbserver-nb\") pod \"e45b3f77-28fa-4188-b58c-b50cebb7fed6\" (UID: \"e45b3f77-28fa-4188-b58c-b50cebb7fed6\") " Jan 29 11:45:05 crc kubenswrapper[4766]: I0129 11:45:05.940158 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e45b3f77-28fa-4188-b58c-b50cebb7fed6-dns-svc\") pod \"e45b3f77-28fa-4188-b58c-b50cebb7fed6\" (UID: \"e45b3f77-28fa-4188-b58c-b50cebb7fed6\") " Jan 29 11:45:05 crc kubenswrapper[4766]: I0129 11:45:05.940197 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e45b3f77-28fa-4188-b58c-b50cebb7fed6-config\") pod \"e45b3f77-28fa-4188-b58c-b50cebb7fed6\" (UID: \"e45b3f77-28fa-4188-b58c-b50cebb7fed6\") " Jan 29 11:45:05 crc kubenswrapper[4766]: I0129 11:45:05.947080 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e45b3f77-28fa-4188-b58c-b50cebb7fed6-kube-api-access-hhnz7" (OuterVolumeSpecName: "kube-api-access-hhnz7") pod "e45b3f77-28fa-4188-b58c-b50cebb7fed6" (UID: "e45b3f77-28fa-4188-b58c-b50cebb7fed6"). InnerVolumeSpecName "kube-api-access-hhnz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:45:05 crc kubenswrapper[4766]: I0129 11:45:05.995125 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e45b3f77-28fa-4188-b58c-b50cebb7fed6-config" (OuterVolumeSpecName: "config") pod "e45b3f77-28fa-4188-b58c-b50cebb7fed6" (UID: "e45b3f77-28fa-4188-b58c-b50cebb7fed6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:45:06 crc kubenswrapper[4766]: I0129 11:45:06.000938 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e45b3f77-28fa-4188-b58c-b50cebb7fed6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e45b3f77-28fa-4188-b58c-b50cebb7fed6" (UID: "e45b3f77-28fa-4188-b58c-b50cebb7fed6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:45:06 crc kubenswrapper[4766]: I0129 11:45:06.018733 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e45b3f77-28fa-4188-b58c-b50cebb7fed6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e45b3f77-28fa-4188-b58c-b50cebb7fed6" (UID: "e45b3f77-28fa-4188-b58c-b50cebb7fed6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:45:06 crc kubenswrapper[4766]: I0129 11:45:06.043016 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e45b3f77-28fa-4188-b58c-b50cebb7fed6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:06 crc kubenswrapper[4766]: I0129 11:45:06.043398 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e45b3f77-28fa-4188-b58c-b50cebb7fed6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:06 crc kubenswrapper[4766]: I0129 11:45:06.043436 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e45b3f77-28fa-4188-b58c-b50cebb7fed6-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:06 crc kubenswrapper[4766]: I0129 11:45:06.043450 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhnz7\" (UniqueName: \"kubernetes.io/projected/e45b3f77-28fa-4188-b58c-b50cebb7fed6-kube-api-access-hhnz7\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:06 crc kubenswrapper[4766]: I0129 11:45:06.510629 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-6jx2g" event={"ID":"e45b3f77-28fa-4188-b58c-b50cebb7fed6","Type":"ContainerDied","Data":"a2a3e87bee12f8489c67cf49b9d57c4420a7256f2e686d11cff890966849d89c"} Jan 29 11:45:06 crc kubenswrapper[4766]: I0129 11:45:06.510682 4766 scope.go:117] "RemoveContainer" containerID="b26cb996f17e745aabd0babff95264727869495ce37b1a7b3616ea49b826f181" Jan 29 11:45:06 crc kubenswrapper[4766]: I0129 11:45:06.510651 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-6jx2g" Jan 29 11:45:06 crc kubenswrapper[4766]: I0129 11:45:06.513472 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9453e394-ed9c-4d36-b200-e559e620a7f7","Type":"ContainerStarted","Data":"7859ad51abd1137169edd5bae5e4945e15a36ed89a66747e6ac27ac9476ded8b"} Jan 29 11:45:06 crc kubenswrapper[4766]: I0129 11:45:06.513506 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9453e394-ed9c-4d36-b200-e559e620a7f7","Type":"ContainerStarted","Data":"587fc7f2d8e47e6824d572c88379e7c339ef834f4f2a33713d946a1ea350ea67"} Jan 29 11:45:06 crc kubenswrapper[4766]: I0129 11:45:06.538146 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=4.574816716 podStartE2EDuration="6.538124575s" podCreationTimestamp="2026-01-29 11:45:00 +0000 UTC" firstStartedPulling="2026-01-29 11:45:03.836333785 +0000 UTC m=+1440.948726796" lastFinishedPulling="2026-01-29 11:45:05.799641644 +0000 UTC m=+1442.912034655" observedRunningTime="2026-01-29 11:45:06.533326592 +0000 UTC m=+1443.645719613" watchObservedRunningTime="2026-01-29 11:45:06.538124575 +0000 UTC m=+1443.650517586" Jan 29 11:45:06 crc kubenswrapper[4766]: I0129 11:45:06.541799 4766 scope.go:117] "RemoveContainer" containerID="fe6564bf18b28c2f59cd28247181d3e0371caa4452b0fe6b1e974e25aee74d55" Jan 29 11:45:06 crc kubenswrapper[4766]: I0129 11:45:06.556576 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-6jx2g"] Jan 29 11:45:06 crc kubenswrapper[4766]: I0129 11:45:06.563334 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-6jx2g"] Jan 29 11:45:06 crc kubenswrapper[4766]: I0129 
11:45:06.849544 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-q7tw5" Jan 29 11:45:06 crc kubenswrapper[4766]: I0129 11:45:06.955312 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/21ea8759-855d-46d1-84d4-4e96bd6efaa3-secret-volume\") pod \"21ea8759-855d-46d1-84d4-4e96bd6efaa3\" (UID: \"21ea8759-855d-46d1-84d4-4e96bd6efaa3\") " Jan 29 11:45:06 crc kubenswrapper[4766]: I0129 11:45:06.955465 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21ea8759-855d-46d1-84d4-4e96bd6efaa3-config-volume\") pod \"21ea8759-855d-46d1-84d4-4e96bd6efaa3\" (UID: \"21ea8759-855d-46d1-84d4-4e96bd6efaa3\") " Jan 29 11:45:06 crc kubenswrapper[4766]: I0129 11:45:06.955514 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f6g7\" (UniqueName: \"kubernetes.io/projected/21ea8759-855d-46d1-84d4-4e96bd6efaa3-kube-api-access-5f6g7\") pod \"21ea8759-855d-46d1-84d4-4e96bd6efaa3\" (UID: \"21ea8759-855d-46d1-84d4-4e96bd6efaa3\") " Jan 29 11:45:06 crc kubenswrapper[4766]: I0129 11:45:06.956175 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21ea8759-855d-46d1-84d4-4e96bd6efaa3-config-volume" (OuterVolumeSpecName: "config-volume") pod "21ea8759-855d-46d1-84d4-4e96bd6efaa3" (UID: "21ea8759-855d-46d1-84d4-4e96bd6efaa3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:45:06 crc kubenswrapper[4766]: I0129 11:45:06.964565 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21ea8759-855d-46d1-84d4-4e96bd6efaa3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "21ea8759-855d-46d1-84d4-4e96bd6efaa3" (UID: "21ea8759-855d-46d1-84d4-4e96bd6efaa3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:45:06 crc kubenswrapper[4766]: I0129 11:45:06.967915 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21ea8759-855d-46d1-84d4-4e96bd6efaa3-kube-api-access-5f6g7" (OuterVolumeSpecName: "kube-api-access-5f6g7") pod "21ea8759-855d-46d1-84d4-4e96bd6efaa3" (UID: "21ea8759-855d-46d1-84d4-4e96bd6efaa3"). InnerVolumeSpecName "kube-api-access-5f6g7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.057646 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f6g7\" (UniqueName: \"kubernetes.io/projected/21ea8759-855d-46d1-84d4-4e96bd6efaa3-kube-api-access-5f6g7\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.057688 4766 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/21ea8759-855d-46d1-84d4-4e96bd6efaa3-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.057711 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21ea8759-855d-46d1-84d4-4e96bd6efaa3-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.073214 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.169087 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-qlsth"] Jan 29 11:45:07 crc kubenswrapper[4766]: E0129 11:45:07.169491 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e45b3f77-28fa-4188-b58c-b50cebb7fed6" containerName="init" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.169514 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e45b3f77-28fa-4188-b58c-b50cebb7fed6" containerName="init" Jan 29 11:45:07 crc kubenswrapper[4766]: E0129 11:45:07.169539 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e45b3f77-28fa-4188-b58c-b50cebb7fed6" containerName="dnsmasq-dns" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.169548 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e45b3f77-28fa-4188-b58c-b50cebb7fed6" containerName="dnsmasq-dns" Jan 29 11:45:07 crc kubenswrapper[4766]: E0129 11:45:07.169578 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ea8759-855d-46d1-84d4-4e96bd6efaa3" containerName="collect-profiles" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.169586 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ea8759-855d-46d1-84d4-4e96bd6efaa3" containerName="collect-profiles" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.169766 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e45b3f77-28fa-4188-b58c-b50cebb7fed6" containerName="dnsmasq-dns" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.169796 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="21ea8759-855d-46d1-84d4-4e96bd6efaa3" containerName="collect-profiles" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.170894 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-qlsth" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.206468 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-qlsth"] Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.239605 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e45b3f77-28fa-4188-b58c-b50cebb7fed6" path="/var/lib/kubelet/pods/e45b3f77-28fa-4188-b58c-b50cebb7fed6/volumes" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.260230 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b055f99f-ca12-47e3-9448-240b2f46ccb3-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-qlsth\" (UID: \"b055f99f-ca12-47e3-9448-240b2f46ccb3\") " pod="openstack/dnsmasq-dns-b8fbc5445-qlsth" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.260303 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzp77\" (UniqueName: \"kubernetes.io/projected/b055f99f-ca12-47e3-9448-240b2f46ccb3-kube-api-access-gzp77\") pod \"dnsmasq-dns-b8fbc5445-qlsth\" (UID: \"b055f99f-ca12-47e3-9448-240b2f46ccb3\") " pod="openstack/dnsmasq-dns-b8fbc5445-qlsth" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.260347 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b055f99f-ca12-47e3-9448-240b2f46ccb3-config\") pod \"dnsmasq-dns-b8fbc5445-qlsth\" (UID: \"b055f99f-ca12-47e3-9448-240b2f46ccb3\") " pod="openstack/dnsmasq-dns-b8fbc5445-qlsth" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.260382 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b055f99f-ca12-47e3-9448-240b2f46ccb3-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-qlsth\" (UID: \"b055f99f-ca12-47e3-9448-240b2f46ccb3\") " pod="openstack/dnsmasq-dns-b8fbc5445-qlsth" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.260426 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b055f99f-ca12-47e3-9448-240b2f46ccb3-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-qlsth\" (UID: \"b055f99f-ca12-47e3-9448-240b2f46ccb3\") " pod="openstack/dnsmasq-dns-b8fbc5445-qlsth" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.361802 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b055f99f-ca12-47e3-9448-240b2f46ccb3-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-qlsth\" (UID: \"b055f99f-ca12-47e3-9448-240b2f46ccb3\") " pod="openstack/dnsmasq-dns-b8fbc5445-qlsth" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.361864 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzp77\" (UniqueName: \"kubernetes.io/projected/b055f99f-ca12-47e3-9448-240b2f46ccb3-kube-api-access-gzp77\") pod \"dnsmasq-dns-b8fbc5445-qlsth\" (UID: \"b055f99f-ca12-47e3-9448-240b2f46ccb3\") " pod="openstack/dnsmasq-dns-b8fbc5445-qlsth" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.361893 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b055f99f-ca12-47e3-9448-240b2f46ccb3-config\") pod \"dnsmasq-dns-b8fbc5445-qlsth\" (UID: 
\"b055f99f-ca12-47e3-9448-240b2f46ccb3\") " pod="openstack/dnsmasq-dns-b8fbc5445-qlsth" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.361920 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b055f99f-ca12-47e3-9448-240b2f46ccb3-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-qlsth\" (UID: \"b055f99f-ca12-47e3-9448-240b2f46ccb3\") " pod="openstack/dnsmasq-dns-b8fbc5445-qlsth" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.361947 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b055f99f-ca12-47e3-9448-240b2f46ccb3-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-qlsth\" (UID: \"b055f99f-ca12-47e3-9448-240b2f46ccb3\") " pod="openstack/dnsmasq-dns-b8fbc5445-qlsth" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.362928 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b055f99f-ca12-47e3-9448-240b2f46ccb3-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-qlsth\" (UID: \"b055f99f-ca12-47e3-9448-240b2f46ccb3\") " pod="openstack/dnsmasq-dns-b8fbc5445-qlsth" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.363537 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b055f99f-ca12-47e3-9448-240b2f46ccb3-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-qlsth\" (UID: \"b055f99f-ca12-47e3-9448-240b2f46ccb3\") " pod="openstack/dnsmasq-dns-b8fbc5445-qlsth" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.364335 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b055f99f-ca12-47e3-9448-240b2f46ccb3-config\") pod \"dnsmasq-dns-b8fbc5445-qlsth\" (UID: \"b055f99f-ca12-47e3-9448-240b2f46ccb3\") " pod="openstack/dnsmasq-dns-b8fbc5445-qlsth" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.364847 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b055f99f-ca12-47e3-9448-240b2f46ccb3-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-qlsth\" (UID: \"b055f99f-ca12-47e3-9448-240b2f46ccb3\") " pod="openstack/dnsmasq-dns-b8fbc5445-qlsth" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.380363 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzp77\" (UniqueName: \"kubernetes.io/projected/b055f99f-ca12-47e3-9448-240b2f46ccb3-kube-api-access-gzp77\") pod \"dnsmasq-dns-b8fbc5445-qlsth\" (UID: \"b055f99f-ca12-47e3-9448-240b2f46ccb3\") " pod="openstack/dnsmasq-dns-b8fbc5445-qlsth" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.522218 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-q7tw5" event={"ID":"21ea8759-855d-46d1-84d4-4e96bd6efaa3","Type":"ContainerDied","Data":"317736784fdc517341ed02b194156eb96a5258abe4e64e106c9b58d9ff33c45b"} Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.522538 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="317736784fdc517341ed02b194156eb96a5258abe4e64e106c9b58d9ff33c45b" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.522558 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.522313 4766 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-q7tw5" Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.524325 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-qlsth" Jan 29 11:45:07 crc kubenswrapper[4766]: W0129 11:45:07.956869 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb055f99f_ca12_47e3_9448_240b2f46ccb3.slice/crio-fc1a11b00cddbba14b395e9ec8d8012c4a97046bdfe9aa7e1c0fe143166465fc WatchSource:0}: Error finding container fc1a11b00cddbba14b395e9ec8d8012c4a97046bdfe9aa7e1c0fe143166465fc: Status 404 returned error can't find the container with id fc1a11b00cddbba14b395e9ec8d8012c4a97046bdfe9aa7e1c0fe143166465fc Jan 29 11:45:07 crc kubenswrapper[4766]: I0129 11:45:07.965031 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-qlsth"] Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.274223 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.280537 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.283788 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.283814 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.283874 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-dtt4c" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.289102 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.302260 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.378108 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") " pod="openstack/swift-storage-0" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.378150 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c299dfaa-12db-4482-ab89-55ba85b8e2a7-etc-swift\") pod \"swift-storage-0\" (UID: \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") " pod="openstack/swift-storage-0" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.378213 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c299dfaa-12db-4482-ab89-55ba85b8e2a7-lock\") pod \"swift-storage-0\" (UID: \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") " pod="openstack/swift-storage-0" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.378232 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c299dfaa-12db-4482-ab89-55ba85b8e2a7-cache\") pod \"swift-storage-0\" (UID: 
\"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") " pod="openstack/swift-storage-0" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.378263 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c299dfaa-12db-4482-ab89-55ba85b8e2a7-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") " pod="openstack/swift-storage-0" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.378286 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chjms\" (UniqueName: \"kubernetes.io/projected/c299dfaa-12db-4482-ab89-55ba85b8e2a7-kube-api-access-chjms\") pod \"swift-storage-0\" (UID: \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") " pod="openstack/swift-storage-0" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.479506 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") " pod="openstack/swift-storage-0" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.479548 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c299dfaa-12db-4482-ab89-55ba85b8e2a7-etc-swift\") pod \"swift-storage-0\" (UID: \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") " pod="openstack/swift-storage-0" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.479604 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c299dfaa-12db-4482-ab89-55ba85b8e2a7-lock\") pod \"swift-storage-0\" (UID: \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") " pod="openstack/swift-storage-0" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.479622 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c299dfaa-12db-4482-ab89-55ba85b8e2a7-cache\") pod \"swift-storage-0\" (UID: \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") " pod="openstack/swift-storage-0" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.479647 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c299dfaa-12db-4482-ab89-55ba85b8e2a7-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") " pod="openstack/swift-storage-0" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.479669 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chjms\" (UniqueName: \"kubernetes.io/projected/c299dfaa-12db-4482-ab89-55ba85b8e2a7-kube-api-access-chjms\") pod \"swift-storage-0\" (UID: \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") " pod="openstack/swift-storage-0" Jan 29 11:45:08 crc kubenswrapper[4766]: E0129 11:45:08.480174 4766 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 11:45:08 crc kubenswrapper[4766]: E0129 11:45:08.480209 4766 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 11:45:08 crc kubenswrapper[4766]: E0129 11:45:08.480278 4766 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/c299dfaa-12db-4482-ab89-55ba85b8e2a7-etc-swift podName:c299dfaa-12db-4482-ab89-55ba85b8e2a7 nodeName:}" failed. No retries permitted until 2026-01-29 11:45:08.980252838 +0000 UTC m=+1446.092645859 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c299dfaa-12db-4482-ab89-55ba85b8e2a7-etc-swift") pod "swift-storage-0" (UID: "c299dfaa-12db-4482-ab89-55ba85b8e2a7") : configmap "swift-ring-files" not found Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.480438 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c299dfaa-12db-4482-ab89-55ba85b8e2a7-lock\") pod \"swift-storage-0\" (UID: \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") " pod="openstack/swift-storage-0" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.480455 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/swift-storage-0" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.480485 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c299dfaa-12db-4482-ab89-55ba85b8e2a7-cache\") pod \"swift-storage-0\" (UID: \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") " pod="openstack/swift-storage-0" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.489164 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c299dfaa-12db-4482-ab89-55ba85b8e2a7-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") " pod="openstack/swift-storage-0" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.497565 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chjms\" (UniqueName: \"kubernetes.io/projected/c299dfaa-12db-4482-ab89-55ba85b8e2a7-kube-api-access-chjms\") pod \"swift-storage-0\" (UID: \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") " pod="openstack/swift-storage-0" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.500051 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") " pod="openstack/swift-storage-0" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.532590 4766 generic.go:334] "Generic (PLEG): container finished" podID="b055f99f-ca12-47e3-9448-240b2f46ccb3" containerID="3235d14f10e320a00952adb57cee9f433f625006dd05ba3746a723cd72eb3673" exitCode=0 Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.532728 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-qlsth" event={"ID":"b055f99f-ca12-47e3-9448-240b2f46ccb3","Type":"ContainerDied","Data":"3235d14f10e320a00952adb57cee9f433f625006dd05ba3746a723cd72eb3673"} Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.532788 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-qlsth" event={"ID":"b055f99f-ca12-47e3-9448-240b2f46ccb3","Type":"ContainerStarted","Data":"fc1a11b00cddbba14b395e9ec8d8012c4a97046bdfe9aa7e1c0fe143166465fc"} Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 
11:45:08.713145 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-skcrx"] Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.714285 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-skcrx" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.723398 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-skcrx"] Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.745447 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.745845 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.746028 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.887483 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/628d9a82-bc49-44b5-a259-9d7f39bcb803-combined-ca-bundle\") pod \"swift-ring-rebalance-skcrx\" (UID: \"628d9a82-bc49-44b5-a259-9d7f39bcb803\") " pod="openstack/swift-ring-rebalance-skcrx" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.888036 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/628d9a82-bc49-44b5-a259-9d7f39bcb803-scripts\") pod \"swift-ring-rebalance-skcrx\" (UID: \"628d9a82-bc49-44b5-a259-9d7f39bcb803\") " pod="openstack/swift-ring-rebalance-skcrx" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.888145 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/628d9a82-bc49-44b5-a259-9d7f39bcb803-dispersionconf\") pod \"swift-ring-rebalance-skcrx\" (UID: \"628d9a82-bc49-44b5-a259-9d7f39bcb803\") " pod="openstack/swift-ring-rebalance-skcrx" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.888192 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/628d9a82-bc49-44b5-a259-9d7f39bcb803-ring-data-devices\") pod \"swift-ring-rebalance-skcrx\" (UID: \"628d9a82-bc49-44b5-a259-9d7f39bcb803\") " pod="openstack/swift-ring-rebalance-skcrx" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.888229 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/628d9a82-bc49-44b5-a259-9d7f39bcb803-etc-swift\") pod \"swift-ring-rebalance-skcrx\" (UID: \"628d9a82-bc49-44b5-a259-9d7f39bcb803\") " pod="openstack/swift-ring-rebalance-skcrx" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.888251 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvdj6\" (UniqueName: \"kubernetes.io/projected/628d9a82-bc49-44b5-a259-9d7f39bcb803-kube-api-access-cvdj6\") pod \"swift-ring-rebalance-skcrx\" (UID: \"628d9a82-bc49-44b5-a259-9d7f39bcb803\") " pod="openstack/swift-ring-rebalance-skcrx" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.888299 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"swiftconf\" (UniqueName: \"kubernetes.io/secret/628d9a82-bc49-44b5-a259-9d7f39bcb803-swiftconf\") pod \"swift-ring-rebalance-skcrx\" (UID: \"628d9a82-bc49-44b5-a259-9d7f39bcb803\") " pod="openstack/swift-ring-rebalance-skcrx" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.989732 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/628d9a82-bc49-44b5-a259-9d7f39bcb803-dispersionconf\") pod \"swift-ring-rebalance-skcrx\" (UID: \"628d9a82-bc49-44b5-a259-9d7f39bcb803\") " pod="openstack/swift-ring-rebalance-skcrx" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.989797 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/628d9a82-bc49-44b5-a259-9d7f39bcb803-ring-data-devices\") pod \"swift-ring-rebalance-skcrx\" (UID: \"628d9a82-bc49-44b5-a259-9d7f39bcb803\") " pod="openstack/swift-ring-rebalance-skcrx" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.989834 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/628d9a82-bc49-44b5-a259-9d7f39bcb803-etc-swift\") pod \"swift-ring-rebalance-skcrx\" (UID: \"628d9a82-bc49-44b5-a259-9d7f39bcb803\") " pod="openstack/swift-ring-rebalance-skcrx" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.989864 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvdj6\" (UniqueName: \"kubernetes.io/projected/628d9a82-bc49-44b5-a259-9d7f39bcb803-kube-api-access-cvdj6\") pod \"swift-ring-rebalance-skcrx\" (UID: \"628d9a82-bc49-44b5-a259-9d7f39bcb803\") " pod="openstack/swift-ring-rebalance-skcrx" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.989913 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/628d9a82-bc49-44b5-a259-9d7f39bcb803-swiftconf\") pod \"swift-ring-rebalance-skcrx\" (UID: \"628d9a82-bc49-44b5-a259-9d7f39bcb803\") " pod="openstack/swift-ring-rebalance-skcrx" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.989950 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/628d9a82-bc49-44b5-a259-9d7f39bcb803-combined-ca-bundle\") pod \"swift-ring-rebalance-skcrx\" (UID: \"628d9a82-bc49-44b5-a259-9d7f39bcb803\") " pod="openstack/swift-ring-rebalance-skcrx" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.989986 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c299dfaa-12db-4482-ab89-55ba85b8e2a7-etc-swift\") pod \"swift-storage-0\" (UID: \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") " pod="openstack/swift-storage-0" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.990021 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/628d9a82-bc49-44b5-a259-9d7f39bcb803-scripts\") pod \"swift-ring-rebalance-skcrx\" (UID: \"628d9a82-bc49-44b5-a259-9d7f39bcb803\") " pod="openstack/swift-ring-rebalance-skcrx" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.990835 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/628d9a82-bc49-44b5-a259-9d7f39bcb803-scripts\") pod \"swift-ring-rebalance-skcrx\" (UID: 
\"628d9a82-bc49-44b5-a259-9d7f39bcb803\") " pod="openstack/swift-ring-rebalance-skcrx" Jan 29 11:45:08 crc kubenswrapper[4766]: E0129 11:45:08.992614 4766 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 11:45:08 crc kubenswrapper[4766]: E0129 11:45:08.992654 4766 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 11:45:08 crc kubenswrapper[4766]: E0129 11:45:08.992715 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c299dfaa-12db-4482-ab89-55ba85b8e2a7-etc-swift podName:c299dfaa-12db-4482-ab89-55ba85b8e2a7 nodeName:}" failed. No retries permitted until 2026-01-29 11:45:09.992695009 +0000 UTC m=+1447.105088080 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c299dfaa-12db-4482-ab89-55ba85b8e2a7-etc-swift") pod "swift-storage-0" (UID: "c299dfaa-12db-4482-ab89-55ba85b8e2a7") : configmap "swift-ring-files" not found Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.992868 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/628d9a82-bc49-44b5-a259-9d7f39bcb803-ring-data-devices\") pod \"swift-ring-rebalance-skcrx\" (UID: \"628d9a82-bc49-44b5-a259-9d7f39bcb803\") " pod="openstack/swift-ring-rebalance-skcrx" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.993205 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/628d9a82-bc49-44b5-a259-9d7f39bcb803-etc-swift\") pod \"swift-ring-rebalance-skcrx\" (UID: \"628d9a82-bc49-44b5-a259-9d7f39bcb803\") " pod="openstack/swift-ring-rebalance-skcrx" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.994857 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/628d9a82-bc49-44b5-a259-9d7f39bcb803-dispersionconf\") pod \"swift-ring-rebalance-skcrx\" (UID: \"628d9a82-bc49-44b5-a259-9d7f39bcb803\") " pod="openstack/swift-ring-rebalance-skcrx" Jan 29 11:45:08 crc kubenswrapper[4766]: I0129 11:45:08.997704 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/628d9a82-bc49-44b5-a259-9d7f39bcb803-swiftconf\") pod \"swift-ring-rebalance-skcrx\" (UID: \"628d9a82-bc49-44b5-a259-9d7f39bcb803\") " pod="openstack/swift-ring-rebalance-skcrx" Jan 29 11:45:09 crc kubenswrapper[4766]: I0129 11:45:09.007209 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/628d9a82-bc49-44b5-a259-9d7f39bcb803-combined-ca-bundle\") pod \"swift-ring-rebalance-skcrx\" (UID: \"628d9a82-bc49-44b5-a259-9d7f39bcb803\") " pod="openstack/swift-ring-rebalance-skcrx" Jan 29 11:45:09 crc kubenswrapper[4766]: I0129 11:45:09.013017 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvdj6\" (UniqueName: \"kubernetes.io/projected/628d9a82-bc49-44b5-a259-9d7f39bcb803-kube-api-access-cvdj6\") pod \"swift-ring-rebalance-skcrx\" (UID: \"628d9a82-bc49-44b5-a259-9d7f39bcb803\") " pod="openstack/swift-ring-rebalance-skcrx" Jan 29 11:45:09 crc kubenswrapper[4766]: I0129 11:45:09.090965 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-skcrx" Jan 29 11:45:09 crc kubenswrapper[4766]: I0129 11:45:09.541783 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-qlsth" event={"ID":"b055f99f-ca12-47e3-9448-240b2f46ccb3","Type":"ContainerStarted","Data":"a6e5ff7f659bca0a3ea5b18214bc4101db7a833041c25e66f73799878e50c43d"} Jan 29 11:45:09 crc kubenswrapper[4766]: I0129 11:45:09.543538 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-qlsth" Jan 29 11:45:09 crc kubenswrapper[4766]: I0129 11:45:09.557685 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-skcrx"] Jan 29 11:45:09 crc kubenswrapper[4766]: I0129 11:45:09.576560 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-qlsth" podStartSLOduration=2.576542542 podStartE2EDuration="2.576542542s" podCreationTimestamp="2026-01-29 11:45:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:45:09.570845344 +0000 UTC m=+1446.683238365" watchObservedRunningTime="2026-01-29 11:45:09.576542542 +0000 UTC m=+1446.688935553" Jan 29 11:45:10 crc kubenswrapper[4766]: I0129 11:45:10.011264 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c299dfaa-12db-4482-ab89-55ba85b8e2a7-etc-swift\") pod \"swift-storage-0\" (UID: \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") " pod="openstack/swift-storage-0" Jan 29 11:45:10 crc kubenswrapper[4766]: E0129 11:45:10.011533 4766 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 11:45:10 crc kubenswrapper[4766]: E0129 11:45:10.011999 4766 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 11:45:10 crc kubenswrapper[4766]: E0129 11:45:10.012787 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c299dfaa-12db-4482-ab89-55ba85b8e2a7-etc-swift podName:c299dfaa-12db-4482-ab89-55ba85b8e2a7 nodeName:}" failed. No retries permitted until 2026-01-29 11:45:12.012122512 +0000 UTC m=+1449.124515523 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c299dfaa-12db-4482-ab89-55ba85b8e2a7-etc-swift") pod "swift-storage-0" (UID: "c299dfaa-12db-4482-ab89-55ba85b8e2a7") : configmap "swift-ring-files" not found Jan 29 11:45:10 crc kubenswrapper[4766]: I0129 11:45:10.552128 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-skcrx" event={"ID":"628d9a82-bc49-44b5-a259-9d7f39bcb803","Type":"ContainerStarted","Data":"66abf514f6b2939250194ca511b892a1b0ccbfd717d7a1ab17ce00489511ab17"} Jan 29 11:45:12 crc kubenswrapper[4766]: I0129 11:45:12.042459 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c299dfaa-12db-4482-ab89-55ba85b8e2a7-etc-swift\") pod \"swift-storage-0\" (UID: \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") " pod="openstack/swift-storage-0" Jan 29 11:45:12 crc kubenswrapper[4766]: E0129 11:45:12.042892 4766 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 11:45:12 crc kubenswrapper[4766]: E0129 11:45:12.042925 4766 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 11:45:12 crc kubenswrapper[4766]: E0129 11:45:12.042985 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c299dfaa-12db-4482-ab89-55ba85b8e2a7-etc-swift podName:c299dfaa-12db-4482-ab89-55ba85b8e2a7 nodeName:}" failed. No retries permitted until 2026-01-29 11:45:16.042964515 +0000 UTC m=+1453.155357526 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c299dfaa-12db-4482-ab89-55ba85b8e2a7-etc-swift") pod "swift-storage-0" (UID: "c299dfaa-12db-4482-ab89-55ba85b8e2a7") : configmap "swift-ring-files" not found Jan 29 11:45:12 crc kubenswrapper[4766]: I0129 11:45:12.998267 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 29 11:45:12 crc kubenswrapper[4766]: I0129 11:45:12.998307 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 29 11:45:13 crc kubenswrapper[4766]: I0129 11:45:13.076227 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 29 11:45:13 crc kubenswrapper[4766]: I0129 11:45:13.589444 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-skcrx" event={"ID":"628d9a82-bc49-44b5-a259-9d7f39bcb803","Type":"ContainerStarted","Data":"7e858737daba72926bb1c1a68da1eac711ef60cc06bf99c8cbce6410dc3a5bde"} Jan 29 11:45:13 crc kubenswrapper[4766]: I0129 11:45:13.610470 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-skcrx" podStartSLOduration=2.187420674 podStartE2EDuration="5.610450607s" podCreationTimestamp="2026-01-29 11:45:08 +0000 UTC" firstStartedPulling="2026-01-29 11:45:09.561300359 +0000 UTC m=+1446.673693380" lastFinishedPulling="2026-01-29 11:45:12.984330302 +0000 UTC m=+1450.096723313" observedRunningTime="2026-01-29 11:45:13.605909571 +0000 UTC m=+1450.718302602" watchObservedRunningTime="2026-01-29 11:45:13.610450607 +0000 UTC m=+1450.722843628" Jan 29 11:45:13 crc kubenswrapper[4766]: I0129 11:45:13.668451 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" 
Jan 29 11:45:14 crc kubenswrapper[4766]: I0129 11:45:14.594292 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d3fe-account-create-update-zjmd9"]
Jan 29 11:45:14 crc kubenswrapper[4766]: I0129 11:45:14.595778 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d3fe-account-create-update-zjmd9"
Jan 29 11:45:14 crc kubenswrapper[4766]: I0129 11:45:14.598319 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Jan 29 11:45:14 crc kubenswrapper[4766]: I0129 11:45:14.606851 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d3fe-account-create-update-zjmd9"]
Jan 29 11:45:14 crc kubenswrapper[4766]: I0129 11:45:14.644980 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-hlq6m"]
Jan 29 11:45:14 crc kubenswrapper[4766]: I0129 11:45:14.646086 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-hlq6m"
Jan 29 11:45:14 crc kubenswrapper[4766]: I0129 11:45:14.651652 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-hlq6m"]
Jan 29 11:45:14 crc kubenswrapper[4766]: I0129 11:45:14.698265 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwsk5\" (UniqueName: \"kubernetes.io/projected/7cf90405-9df9-4821-831d-f6bb66f3268e-kube-api-access-gwsk5\") pod \"keystone-d3fe-account-create-update-zjmd9\" (UID: \"7cf90405-9df9-4821-831d-f6bb66f3268e\") " pod="openstack/keystone-d3fe-account-create-update-zjmd9"
Jan 29 11:45:14 crc kubenswrapper[4766]: I0129 11:45:14.698387 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cf90405-9df9-4821-831d-f6bb66f3268e-operator-scripts\") pod \"keystone-d3fe-account-create-update-zjmd9\" (UID: \"7cf90405-9df9-4821-831d-f6bb66f3268e\") " pod="openstack/keystone-d3fe-account-create-update-zjmd9"
Jan 29 11:45:14 crc kubenswrapper[4766]: I0129 11:45:14.765644 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 29 11:45:14 crc kubenswrapper[4766]: I0129 11:45:14.766324 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 29 11:45:14 crc kubenswrapper[4766]: I0129 11:45:14.800028 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81700a7f-32e9-45dd-b223-058f4340deb4-operator-scripts\") pod \"keystone-db-create-hlq6m\" (UID: \"81700a7f-32e9-45dd-b223-058f4340deb4\") " pod="openstack/keystone-db-create-hlq6m"
Jan 29 11:45:14 crc kubenswrapper[4766]: I0129 11:45:14.800092 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwsk5\" (UniqueName: \"kubernetes.io/projected/7cf90405-9df9-4821-831d-f6bb66f3268e-kube-api-access-gwsk5\") pod \"keystone-d3fe-account-create-update-zjmd9\" (UID: \"7cf90405-9df9-4821-831d-f6bb66f3268e\") " pod="openstack/keystone-d3fe-account-create-update-zjmd9"
Jan 29 11:45:14 crc kubenswrapper[4766]: I0129 11:45:14.800226 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cf90405-9df9-4821-831d-f6bb66f3268e-operator-scripts\") pod \"keystone-d3fe-account-create-update-zjmd9\" (UID: \"7cf90405-9df9-4821-831d-f6bb66f3268e\") " pod="openstack/keystone-d3fe-account-create-update-zjmd9"
Jan 29 11:45:14 crc kubenswrapper[4766]: I0129 11:45:14.800392 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlqw9\" (UniqueName: \"kubernetes.io/projected/81700a7f-32e9-45dd-b223-058f4340deb4-kube-api-access-dlqw9\") pod \"keystone-db-create-hlq6m\" (UID: \"81700a7f-32e9-45dd-b223-058f4340deb4\") " pod="openstack/keystone-db-create-hlq6m"
Jan 29 11:45:14 crc kubenswrapper[4766]: I0129 11:45:14.801340 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cf90405-9df9-4821-831d-f6bb66f3268e-operator-scripts\") pod \"keystone-d3fe-account-create-update-zjmd9\" (UID: \"7cf90405-9df9-4821-831d-f6bb66f3268e\") " pod="openstack/keystone-d3fe-account-create-update-zjmd9"
Jan 29 11:45:14 crc kubenswrapper[4766]: I0129 11:45:14.832126 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwsk5\" (UniqueName: \"kubernetes.io/projected/7cf90405-9df9-4821-831d-f6bb66f3268e-kube-api-access-gwsk5\") pod \"keystone-d3fe-account-create-update-zjmd9\" (UID: \"7cf90405-9df9-4821-831d-f6bb66f3268e\") " pod="openstack/keystone-d3fe-account-create-update-zjmd9"
Jan 29 11:45:14 crc kubenswrapper[4766]: I0129 11:45:14.836874 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 29 11:45:14 crc kubenswrapper[4766]: I0129 11:45:14.893500 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-vc9jp"]
Jan 29 11:45:14 crc kubenswrapper[4766]: I0129 11:45:14.894646 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vc9jp"
Jan 29 11:45:14 crc kubenswrapper[4766]: I0129 11:45:14.902399 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81700a7f-32e9-45dd-b223-058f4340deb4-operator-scripts\") pod \"keystone-db-create-hlq6m\" (UID: \"81700a7f-32e9-45dd-b223-058f4340deb4\") " pod="openstack/keystone-db-create-hlq6m"
Jan 29 11:45:14 crc kubenswrapper[4766]: I0129 11:45:14.902722 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlqw9\" (UniqueName: \"kubernetes.io/projected/81700a7f-32e9-45dd-b223-058f4340deb4-kube-api-access-dlqw9\") pod \"keystone-db-create-hlq6m\" (UID: \"81700a7f-32e9-45dd-b223-058f4340deb4\") " pod="openstack/keystone-db-create-hlq6m"
Jan 29 11:45:14 crc kubenswrapper[4766]: I0129 11:45:14.904355 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81700a7f-32e9-45dd-b223-058f4340deb4-operator-scripts\") pod \"keystone-db-create-hlq6m\" (UID: \"81700a7f-32e9-45dd-b223-058f4340deb4\") " pod="openstack/keystone-db-create-hlq6m"
Jan 29 11:45:14 crc kubenswrapper[4766]: I0129 11:45:14.921910 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d3fe-account-create-update-zjmd9"
Jan 29 11:45:14 crc kubenswrapper[4766]: I0129 11:45:14.966277 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlqw9\" (UniqueName: \"kubernetes.io/projected/81700a7f-32e9-45dd-b223-058f4340deb4-kube-api-access-dlqw9\") pod \"keystone-db-create-hlq6m\" (UID: \"81700a7f-32e9-45dd-b223-058f4340deb4\") " pod="openstack/keystone-db-create-hlq6m"
Jan 29 11:45:14 crc kubenswrapper[4766]: I0129 11:45:14.975955 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-vc9jp"]
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.005080 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcjc6\" (UniqueName: \"kubernetes.io/projected/ce78868c-e61f-4d87-9e79-27b29a75644d-kube-api-access-dcjc6\") pod \"placement-db-create-vc9jp\" (UID: \"ce78868c-e61f-4d87-9e79-27b29a75644d\") " pod="openstack/placement-db-create-vc9jp"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.005211 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce78868c-e61f-4d87-9e79-27b29a75644d-operator-scripts\") pod \"placement-db-create-vc9jp\" (UID: \"ce78868c-e61f-4d87-9e79-27b29a75644d\") " pod="openstack/placement-db-create-vc9jp"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.059859 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-fb14-account-create-update-ndzx8"]
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.062229 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-fb14-account-create-update-ndzx8"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.065315 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.107863 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcjc6\" (UniqueName: \"kubernetes.io/projected/ce78868c-e61f-4d87-9e79-27b29a75644d-kube-api-access-dcjc6\") pod \"placement-db-create-vc9jp\" (UID: \"ce78868c-e61f-4d87-9e79-27b29a75644d\") " pod="openstack/placement-db-create-vc9jp"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.108007 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce78868c-e61f-4d87-9e79-27b29a75644d-operator-scripts\") pod \"placement-db-create-vc9jp\" (UID: \"ce78868c-e61f-4d87-9e79-27b29a75644d\") " pod="openstack/placement-db-create-vc9jp"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.108960 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce78868c-e61f-4d87-9e79-27b29a75644d-operator-scripts\") pod \"placement-db-create-vc9jp\" (UID: \"ce78868c-e61f-4d87-9e79-27b29a75644d\") " pod="openstack/placement-db-create-vc9jp"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.116519 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-fb14-account-create-update-ndzx8"]
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.140991 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcjc6\" (UniqueName: \"kubernetes.io/projected/ce78868c-e61f-4d87-9e79-27b29a75644d-kube-api-access-dcjc6\") pod \"placement-db-create-vc9jp\" (UID: \"ce78868c-e61f-4d87-9e79-27b29a75644d\") " pod="openstack/placement-db-create-vc9jp"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.197999 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-8zbms"]
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.199364 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8zbms"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.209232 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-8zbms"]
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.210273 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf714f89-4e2e-43cb-9cbb-427c6270e65e-operator-scripts\") pod \"placement-fb14-account-create-update-ndzx8\" (UID: \"bf714f89-4e2e-43cb-9cbb-427c6270e65e\") " pod="openstack/placement-fb14-account-create-update-ndzx8"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.210318 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-728n8\" (UniqueName: \"kubernetes.io/projected/bf714f89-4e2e-43cb-9cbb-427c6270e65e-kube-api-access-728n8\") pod \"placement-fb14-account-create-update-ndzx8\" (UID: \"bf714f89-4e2e-43cb-9cbb-427c6270e65e\") " pod="openstack/placement-fb14-account-create-update-ndzx8"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.229784 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vc9jp"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.261178 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-hlq6m"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.315324 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf714f89-4e2e-43cb-9cbb-427c6270e65e-operator-scripts\") pod \"placement-fb14-account-create-update-ndzx8\" (UID: \"bf714f89-4e2e-43cb-9cbb-427c6270e65e\") " pod="openstack/placement-fb14-account-create-update-ndzx8"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.315444 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-728n8\" (UniqueName: \"kubernetes.io/projected/bf714f89-4e2e-43cb-9cbb-427c6270e65e-kube-api-access-728n8\") pod \"placement-fb14-account-create-update-ndzx8\" (UID: \"bf714f89-4e2e-43cb-9cbb-427c6270e65e\") " pod="openstack/placement-fb14-account-create-update-ndzx8"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.315512 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crsgc\" (UniqueName: \"kubernetes.io/projected/20510388-23e4-4945-a5d5-db74a909518c-kube-api-access-crsgc\") pod \"glance-db-create-8zbms\" (UID: \"20510388-23e4-4945-a5d5-db74a909518c\") " pod="openstack/glance-db-create-8zbms"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.315653 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20510388-23e4-4945-a5d5-db74a909518c-operator-scripts\") pod \"glance-db-create-8zbms\" (UID: \"20510388-23e4-4945-a5d5-db74a909518c\") " pod="openstack/glance-db-create-8zbms"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.316151 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf714f89-4e2e-43cb-9cbb-427c6270e65e-operator-scripts\") pod \"placement-fb14-account-create-update-ndzx8\" (UID: \"bf714f89-4e2e-43cb-9cbb-427c6270e65e\") " pod="openstack/placement-fb14-account-create-update-ndzx8"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.318537 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-4e41-account-create-update-p9lr6"]
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.319894 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4e41-account-create-update-p9lr6"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.323633 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4e41-account-create-update-p9lr6"]
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.325141 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.337966 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-728n8\" (UniqueName: \"kubernetes.io/projected/bf714f89-4e2e-43cb-9cbb-427c6270e65e-kube-api-access-728n8\") pod \"placement-fb14-account-create-update-ndzx8\" (UID: \"bf714f89-4e2e-43cb-9cbb-427c6270e65e\") " pod="openstack/placement-fb14-account-create-update-ndzx8"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.382124 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-fb14-account-create-update-ndzx8"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.417732 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5dc4d47-9e72-4287-be4f-176017f5c41a-operator-scripts\") pod \"glance-4e41-account-create-update-p9lr6\" (UID: \"f5dc4d47-9e72-4287-be4f-176017f5c41a\") " pod="openstack/glance-4e41-account-create-update-p9lr6"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.418795 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9vmr\" (UniqueName: \"kubernetes.io/projected/f5dc4d47-9e72-4287-be4f-176017f5c41a-kube-api-access-g9vmr\") pod \"glance-4e41-account-create-update-p9lr6\" (UID: \"f5dc4d47-9e72-4287-be4f-176017f5c41a\") " pod="openstack/glance-4e41-account-create-update-p9lr6"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.418925 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crsgc\" (UniqueName: \"kubernetes.io/projected/20510388-23e4-4945-a5d5-db74a909518c-kube-api-access-crsgc\") pod \"glance-db-create-8zbms\" (UID: \"20510388-23e4-4945-a5d5-db74a909518c\") " pod="openstack/glance-db-create-8zbms"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.419192 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20510388-23e4-4945-a5d5-db74a909518c-operator-scripts\") pod \"glance-db-create-8zbms\" (UID: \"20510388-23e4-4945-a5d5-db74a909518c\") " pod="openstack/glance-db-create-8zbms"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.420114 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20510388-23e4-4945-a5d5-db74a909518c-operator-scripts\") pod \"glance-db-create-8zbms\" (UID: \"20510388-23e4-4945-a5d5-db74a909518c\") " pod="openstack/glance-db-create-8zbms"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.440146 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crsgc\" (UniqueName: \"kubernetes.io/projected/20510388-23e4-4945-a5d5-db74a909518c-kube-api-access-crsgc\") pod \"glance-db-create-8zbms\" (UID: \"20510388-23e4-4945-a5d5-db74a909518c\") " pod="openstack/glance-db-create-8zbms"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.520442 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5dc4d47-9e72-4287-be4f-176017f5c41a-operator-scripts\") pod \"glance-4e41-account-create-update-p9lr6\" (UID: \"f5dc4d47-9e72-4287-be4f-176017f5c41a\") " pod="openstack/glance-4e41-account-create-update-p9lr6"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.520516 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9vmr\" (UniqueName: \"kubernetes.io/projected/f5dc4d47-9e72-4287-be4f-176017f5c41a-kube-api-access-g9vmr\") pod \"glance-4e41-account-create-update-p9lr6\" (UID: \"f5dc4d47-9e72-4287-be4f-176017f5c41a\") " pod="openstack/glance-4e41-account-create-update-p9lr6"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.521655 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5dc4d47-9e72-4287-be4f-176017f5c41a-operator-scripts\") pod \"glance-4e41-account-create-update-p9lr6\" (UID: \"f5dc4d47-9e72-4287-be4f-176017f5c41a\") " pod="openstack/glance-4e41-account-create-update-p9lr6"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.540940 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8zbms"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.546234 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9vmr\" (UniqueName: \"kubernetes.io/projected/f5dc4d47-9e72-4287-be4f-176017f5c41a-kube-api-access-g9vmr\") pod \"glance-4e41-account-create-update-p9lr6\" (UID: \"f5dc4d47-9e72-4287-be4f-176017f5c41a\") " pod="openstack/glance-4e41-account-create-update-p9lr6"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.632122 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d3fe-account-create-update-zjmd9"]
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.643873 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4e41-account-create-update-p9lr6"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.726307 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-vc9jp"]
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.736007 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.833348 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-hlq6m"]
Jan 29 11:45:15 crc kubenswrapper[4766]: I0129 11:45:15.935990 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-fb14-account-create-update-ndzx8"]
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.053298 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c299dfaa-12db-4482-ab89-55ba85b8e2a7-etc-swift\") pod \"swift-storage-0\" (UID: \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") " pod="openstack/swift-storage-0"
Jan 29 11:45:16 crc kubenswrapper[4766]: E0129 11:45:16.053679 4766 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 29 11:45:16 crc kubenswrapper[4766]: E0129 11:45:16.053720 4766 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 29 11:45:16 crc kubenswrapper[4766]: E0129 11:45:16.053768 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c299dfaa-12db-4482-ab89-55ba85b8e2a7-etc-swift podName:c299dfaa-12db-4482-ab89-55ba85b8e2a7 nodeName:}" failed. No retries permitted until 2026-01-29 11:45:24.053753209 +0000 UTC m=+1461.166146220 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c299dfaa-12db-4482-ab89-55ba85b8e2a7-etc-swift") pod "swift-storage-0" (UID: "c299dfaa-12db-4482-ab89-55ba85b8e2a7") : configmap "swift-ring-files" not found
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.111038 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-8zbms"]
Jan 29 11:45:16 crc kubenswrapper[4766]: W0129 11:45:16.118552 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20510388_23e4_4945_a5d5_db74a909518c.slice/crio-dc42713a41213f099e98f285de543ecdef8d9d1cfd6b2e17fedd10e0a05f9324 WatchSource:0}: Error finding container dc42713a41213f099e98f285de543ecdef8d9d1cfd6b2e17fedd10e0a05f9324: Status 404 returned error can't find the container with id dc42713a41213f099e98f285de543ecdef8d9d1cfd6b2e17fedd10e0a05f9324
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.350664 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4e41-account-create-update-p9lr6"]
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.364049 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.364103 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.364150 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-npgg8"
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.364914 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bb57735502b7ce72125607b2636513bf8e24464584c5c5d20047f0fe3c421130"} pod="openshift-machine-config-operator/machine-config-daemon-npgg8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.364986 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" containerID="cri-o://bb57735502b7ce72125607b2636513bf8e24464584c5c5d20047f0fe3c421130" gracePeriod=600
Jan 29 11:45:16 crc kubenswrapper[4766]: W0129 11:45:16.407916 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5dc4d47_9e72_4287_be4f_176017f5c41a.slice/crio-bac2b63e5ed7d71c55a8074908e8c48842aa2d59c504549b5edfb4b215257971 WatchSource:0}: Error finding container bac2b63e5ed7d71c55a8074908e8c48842aa2d59c504549b5edfb4b215257971: Status 404 returned error can't find the container with id bac2b63e5ed7d71c55a8074908e8c48842aa2d59c504549b5edfb4b215257971
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.643378 4766 generic.go:334] "Generic (PLEG): container finished" podID="7cf90405-9df9-4821-831d-f6bb66f3268e" containerID="55b6f91779e615e2274808931072eec9e0ba72e68ba23e7d95cb2a545b23ea53" exitCode=0
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.643454 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d3fe-account-create-update-zjmd9" event={"ID":"7cf90405-9df9-4821-831d-f6bb66f3268e","Type":"ContainerDied","Data":"55b6f91779e615e2274808931072eec9e0ba72e68ba23e7d95cb2a545b23ea53"}
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.643480 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d3fe-account-create-update-zjmd9" event={"ID":"7cf90405-9df9-4821-831d-f6bb66f3268e","Type":"ContainerStarted","Data":"73574ade2772b7619fb733b5fe67355905dccf5de22c11eb07081ee97ac37650"}
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.645579 4766 generic.go:334] "Generic (PLEG): container finished" podID="20510388-23e4-4945-a5d5-db74a909518c" containerID="1094875acb955bd1cd45b0f01cd633b759da7b9e7cfdb05fda344de26ec577fd" exitCode=0
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.645700 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8zbms" event={"ID":"20510388-23e4-4945-a5d5-db74a909518c","Type":"ContainerDied","Data":"1094875acb955bd1cd45b0f01cd633b759da7b9e7cfdb05fda344de26ec577fd"}
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.645717 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8zbms" event={"ID":"20510388-23e4-4945-a5d5-db74a909518c","Type":"ContainerStarted","Data":"dc42713a41213f099e98f285de543ecdef8d9d1cfd6b2e17fedd10e0a05f9324"}
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.648504 4766 generic.go:334] "Generic (PLEG): container finished" podID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerID="bb57735502b7ce72125607b2636513bf8e24464584c5c5d20047f0fe3c421130" exitCode=0
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.648547 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" event={"ID":"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a","Type":"ContainerDied","Data":"bb57735502b7ce72125607b2636513bf8e24464584c5c5d20047f0fe3c421130"}
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.648565 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" event={"ID":"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a","Type":"ContainerStarted","Data":"0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d"}
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.648581 4766 scope.go:117] "RemoveContainer" containerID="f2e08a09c8256dcfbd1ccad5d2946f6ff93f59cfb98c59a5e92b10bac66b9370"
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.653385 4766 generic.go:334] "Generic (PLEG): container finished" podID="bf714f89-4e2e-43cb-9cbb-427c6270e65e" containerID="b039e316a43239d08dfa9f608708018443b16533ce721c610c2d8645a7f4a4e3" exitCode=0
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.653486 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fb14-account-create-update-ndzx8" event={"ID":"bf714f89-4e2e-43cb-9cbb-427c6270e65e","Type":"ContainerDied","Data":"b039e316a43239d08dfa9f608708018443b16533ce721c610c2d8645a7f4a4e3"}
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.653510 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fb14-account-create-update-ndzx8" event={"ID":"bf714f89-4e2e-43cb-9cbb-427c6270e65e","Type":"ContainerStarted","Data":"be30adae9f6d11531c6bebedda63242049e276b08d51bb5f9f4e45f72bb024c3"}
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.654986 4766 generic.go:334] "Generic (PLEG): container finished" podID="81700a7f-32e9-45dd-b223-058f4340deb4" containerID="e773f46ba927ee115db6b04e8fb94c7b75de4344027c40028c5ce426541fa4a2" exitCode=0
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.655053 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-hlq6m" event={"ID":"81700a7f-32e9-45dd-b223-058f4340deb4","Type":"ContainerDied","Data":"e773f46ba927ee115db6b04e8fb94c7b75de4344027c40028c5ce426541fa4a2"}
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.655078 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-hlq6m" event={"ID":"81700a7f-32e9-45dd-b223-058f4340deb4","Type":"ContainerStarted","Data":"59aab6ef1fec2140596e1581f80755629a6383cce337d2a6e0d4daa685ed5f05"}
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.656532 4766 generic.go:334] "Generic (PLEG): container finished" podID="ce78868c-e61f-4d87-9e79-27b29a75644d" containerID="713a52694314534b488fc1a0658f9e5b34496a8bf5bc37528ef84f27656debf8" exitCode=0
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.656576 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vc9jp" event={"ID":"ce78868c-e61f-4d87-9e79-27b29a75644d","Type":"ContainerDied","Data":"713a52694314534b488fc1a0658f9e5b34496a8bf5bc37528ef84f27656debf8"}
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.656594 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vc9jp" event={"ID":"ce78868c-e61f-4d87-9e79-27b29a75644d","Type":"ContainerStarted","Data":"a15e431030251298249f503402fd8717ef7c6520d6cf8aea4558e4b71b6b4ea9"}
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.658777 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4e41-account-create-update-p9lr6" event={"ID":"f5dc4d47-9e72-4287-be4f-176017f5c41a","Type":"ContainerStarted","Data":"01ce65370c489b737bf027f65e90f5d3d975fcaf5a42a03b572e4c06aeb4f944"}
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.658800 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4e41-account-create-update-p9lr6" event={"ID":"f5dc4d47-9e72-4287-be4f-176017f5c41a","Type":"ContainerStarted","Data":"bac2b63e5ed7d71c55a8074908e8c48842aa2d59c504549b5edfb4b215257971"}
Jan 29 11:45:16 crc kubenswrapper[4766]: I0129 11:45:16.695309 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-4e41-account-create-update-p9lr6" podStartSLOduration=1.695290741 podStartE2EDuration="1.695290741s" podCreationTimestamp="2026-01-29 11:45:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:45:16.689739057 +0000 UTC m=+1453.802132068" watchObservedRunningTime="2026-01-29 11:45:16.695290741 +0000 UTC m=+1453.807683752"
Jan 29 11:45:17 crc kubenswrapper[4766]: I0129 11:45:17.526606 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-qlsth"
Jan 29 11:45:17 crc kubenswrapper[4766]: I0129 11:45:17.569667 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-bh8d5"]
Jan 29 11:45:17 crc kubenswrapper[4766]: I0129 11:45:17.569940 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-bh8d5" podUID="53499ab2-7d33-4eb2-88da-fc49dc29009f" containerName="dnsmasq-dns" containerID="cri-o://1aa5c5556afd2aa58d9cdfcd1d516e610002c2c03f8bb37e88eccd96a502dfc5" gracePeriod=10
Jan 29 11:45:17 crc kubenswrapper[4766]: I0129 11:45:17.672951 4766 generic.go:334] "Generic (PLEG): container finished" podID="f5dc4d47-9e72-4287-be4f-176017f5c41a" containerID="01ce65370c489b737bf027f65e90f5d3d975fcaf5a42a03b572e4c06aeb4f944" exitCode=0
Jan 29 11:45:17 crc kubenswrapper[4766]: I0129 11:45:17.673074 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4e41-account-create-update-p9lr6" event={"ID":"f5dc4d47-9e72-4287-be4f-176017f5c41a","Type":"ContainerDied","Data":"01ce65370c489b737bf027f65e90f5d3d975fcaf5a42a03b572e4c06aeb4f944"}
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.132093 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-fb14-account-create-update-ndzx8"
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.300958 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf714f89-4e2e-43cb-9cbb-427c6270e65e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bf714f89-4e2e-43cb-9cbb-427c6270e65e" (UID: "bf714f89-4e2e-43cb-9cbb-427c6270e65e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.307667 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf714f89-4e2e-43cb-9cbb-427c6270e65e-operator-scripts\") pod \"bf714f89-4e2e-43cb-9cbb-427c6270e65e\" (UID: \"bf714f89-4e2e-43cb-9cbb-427c6270e65e\") "
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.307795 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-728n8\" (UniqueName: \"kubernetes.io/projected/bf714f89-4e2e-43cb-9cbb-427c6270e65e-kube-api-access-728n8\") pod \"bf714f89-4e2e-43cb-9cbb-427c6270e65e\" (UID: \"bf714f89-4e2e-43cb-9cbb-427c6270e65e\") "
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.308517 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf714f89-4e2e-43cb-9cbb-427c6270e65e-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.327682 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf714f89-4e2e-43cb-9cbb-427c6270e65e-kube-api-access-728n8" (OuterVolumeSpecName: "kube-api-access-728n8") pod "bf714f89-4e2e-43cb-9cbb-427c6270e65e" (UID: "bf714f89-4e2e-43cb-9cbb-427c6270e65e"). InnerVolumeSpecName "kube-api-access-728n8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.409600 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-728n8\" (UniqueName: \"kubernetes.io/projected/bf714f89-4e2e-43cb-9cbb-427c6270e65e-kube-api-access-728n8\") on node \"crc\" DevicePath \"\""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.449907 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8zbms"
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.471438 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-hlq6m"
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.495894 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-bh8d5"
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.502622 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vc9jp"
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.532659 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d3fe-account-create-update-zjmd9"
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.612338 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53499ab2-7d33-4eb2-88da-fc49dc29009f-config\") pod \"53499ab2-7d33-4eb2-88da-fc49dc29009f\" (UID: \"53499ab2-7d33-4eb2-88da-fc49dc29009f\") "
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.612402 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce78868c-e61f-4d87-9e79-27b29a75644d-operator-scripts\") pod \"ce78868c-e61f-4d87-9e79-27b29a75644d\" (UID: \"ce78868c-e61f-4d87-9e79-27b29a75644d\") "
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.612463 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlqw9\" (UniqueName: \"kubernetes.io/projected/81700a7f-32e9-45dd-b223-058f4340deb4-kube-api-access-dlqw9\") pod \"81700a7f-32e9-45dd-b223-058f4340deb4\" (UID: \"81700a7f-32e9-45dd-b223-058f4340deb4\") "
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.612580 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53499ab2-7d33-4eb2-88da-fc49dc29009f-ovsdbserver-sb\") pod \"53499ab2-7d33-4eb2-88da-fc49dc29009f\" (UID: \"53499ab2-7d33-4eb2-88da-fc49dc29009f\") "
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.612610 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwfbr\" (UniqueName: \"kubernetes.io/projected/53499ab2-7d33-4eb2-88da-fc49dc29009f-kube-api-access-dwfbr\") pod \"53499ab2-7d33-4eb2-88da-fc49dc29009f\" (UID: \"53499ab2-7d33-4eb2-88da-fc49dc29009f\") "
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.612637 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81700a7f-32e9-45dd-b223-058f4340deb4-operator-scripts\") pod \"81700a7f-32e9-45dd-b223-058f4340deb4\" (UID: \"81700a7f-32e9-45dd-b223-058f4340deb4\") "
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.612667 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53499ab2-7d33-4eb2-88da-fc49dc29009f-dns-svc\") pod \"53499ab2-7d33-4eb2-88da-fc49dc29009f\" (UID: \"53499ab2-7d33-4eb2-88da-fc49dc29009f\") "
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.612697 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cf90405-9df9-4821-831d-f6bb66f3268e-operator-scripts\") pod \"7cf90405-9df9-4821-831d-f6bb66f3268e\" (UID: \"7cf90405-9df9-4821-831d-f6bb66f3268e\") "
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.612726 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53499ab2-7d33-4eb2-88da-fc49dc29009f-ovsdbserver-nb\") pod \"53499ab2-7d33-4eb2-88da-fc49dc29009f\" (UID: \"53499ab2-7d33-4eb2-88da-fc49dc29009f\") "
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.612756 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcjc6\" (UniqueName: \"kubernetes.io/projected/ce78868c-e61f-4d87-9e79-27b29a75644d-kube-api-access-dcjc6\") pod \"ce78868c-e61f-4d87-9e79-27b29a75644d\" (UID: \"ce78868c-e61f-4d87-9e79-27b29a75644d\") "
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.612818 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwsk5\" (UniqueName: \"kubernetes.io/projected/7cf90405-9df9-4821-831d-f6bb66f3268e-kube-api-access-gwsk5\") pod \"7cf90405-9df9-4821-831d-f6bb66f3268e\" (UID: \"7cf90405-9df9-4821-831d-f6bb66f3268e\") "
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.612926 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20510388-23e4-4945-a5d5-db74a909518c-operator-scripts\") pod \"20510388-23e4-4945-a5d5-db74a909518c\" (UID: \"20510388-23e4-4945-a5d5-db74a909518c\") "
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.612952 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crsgc\" (UniqueName: \"kubernetes.io/projected/20510388-23e4-4945-a5d5-db74a909518c-kube-api-access-crsgc\") pod \"20510388-23e4-4945-a5d5-db74a909518c\" (UID: \"20510388-23e4-4945-a5d5-db74a909518c\") "
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.615049 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81700a7f-32e9-45dd-b223-058f4340deb4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81700a7f-32e9-45dd-b223-058f4340deb4" (UID: "81700a7f-32e9-45dd-b223-058f4340deb4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.617789 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20510388-23e4-4945-a5d5-db74a909518c-kube-api-access-crsgc" (OuterVolumeSpecName: "kube-api-access-crsgc") pod "20510388-23e4-4945-a5d5-db74a909518c" (UID: "20510388-23e4-4945-a5d5-db74a909518c"). InnerVolumeSpecName "kube-api-access-crsgc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.618052 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cf90405-9df9-4821-831d-f6bb66f3268e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7cf90405-9df9-4821-831d-f6bb66f3268e" (UID: "7cf90405-9df9-4821-831d-f6bb66f3268e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.618547 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20510388-23e4-4945-a5d5-db74a909518c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "20510388-23e4-4945-a5d5-db74a909518c" (UID: "20510388-23e4-4945-a5d5-db74a909518c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.618844 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce78868c-e61f-4d87-9e79-27b29a75644d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce78868c-e61f-4d87-9e79-27b29a75644d" (UID: "ce78868c-e61f-4d87-9e79-27b29a75644d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.619941 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81700a7f-32e9-45dd-b223-058f4340deb4-kube-api-access-dlqw9" (OuterVolumeSpecName: "kube-api-access-dlqw9") pod "81700a7f-32e9-45dd-b223-058f4340deb4" (UID: "81700a7f-32e9-45dd-b223-058f4340deb4"). InnerVolumeSpecName "kube-api-access-dlqw9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.622680 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce78868c-e61f-4d87-9e79-27b29a75644d-kube-api-access-dcjc6" (OuterVolumeSpecName: "kube-api-access-dcjc6") pod "ce78868c-e61f-4d87-9e79-27b29a75644d" (UID: "ce78868c-e61f-4d87-9e79-27b29a75644d"). InnerVolumeSpecName "kube-api-access-dcjc6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.623474 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cf90405-9df9-4821-831d-f6bb66f3268e-kube-api-access-gwsk5" (OuterVolumeSpecName: "kube-api-access-gwsk5") pod "7cf90405-9df9-4821-831d-f6bb66f3268e" (UID: "7cf90405-9df9-4821-831d-f6bb66f3268e"). InnerVolumeSpecName "kube-api-access-gwsk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.632611 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53499ab2-7d33-4eb2-88da-fc49dc29009f-kube-api-access-dwfbr" (OuterVolumeSpecName: "kube-api-access-dwfbr") pod "53499ab2-7d33-4eb2-88da-fc49dc29009f" (UID: "53499ab2-7d33-4eb2-88da-fc49dc29009f"). InnerVolumeSpecName "kube-api-access-dwfbr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.673270 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53499ab2-7d33-4eb2-88da-fc49dc29009f-config" (OuterVolumeSpecName: "config") pod "53499ab2-7d33-4eb2-88da-fc49dc29009f" (UID: "53499ab2-7d33-4eb2-88da-fc49dc29009f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.678072 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53499ab2-7d33-4eb2-88da-fc49dc29009f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "53499ab2-7d33-4eb2-88da-fc49dc29009f" (UID: "53499ab2-7d33-4eb2-88da-fc49dc29009f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.680331 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53499ab2-7d33-4eb2-88da-fc49dc29009f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "53499ab2-7d33-4eb2-88da-fc49dc29009f" (UID: "53499ab2-7d33-4eb2-88da-fc49dc29009f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.683828 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53499ab2-7d33-4eb2-88da-fc49dc29009f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "53499ab2-7d33-4eb2-88da-fc49dc29009f" (UID: "53499ab2-7d33-4eb2-88da-fc49dc29009f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.693006 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-hlq6m" event={"ID":"81700a7f-32e9-45dd-b223-058f4340deb4","Type":"ContainerDied","Data":"59aab6ef1fec2140596e1581f80755629a6383cce337d2a6e0d4daa685ed5f05"}
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.693057 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59aab6ef1fec2140596e1581f80755629a6383cce337d2a6e0d4daa685ed5f05"
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.693130 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-hlq6m"
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.695334 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vc9jp" event={"ID":"ce78868c-e61f-4d87-9e79-27b29a75644d","Type":"ContainerDied","Data":"a15e431030251298249f503402fd8717ef7c6520d6cf8aea4558e4b71b6b4ea9"}
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.695479 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a15e431030251298249f503402fd8717ef7c6520d6cf8aea4558e4b71b6b4ea9"
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.695554 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vc9jp"
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.701274 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d3fe-account-create-update-zjmd9"
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.701559 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d3fe-account-create-update-zjmd9" event={"ID":"7cf90405-9df9-4821-831d-f6bb66f3268e","Type":"ContainerDied","Data":"73574ade2772b7619fb733b5fe67355905dccf5de22c11eb07081ee97ac37650"}
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.701618 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73574ade2772b7619fb733b5fe67355905dccf5de22c11eb07081ee97ac37650"
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.702976 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8zbms" event={"ID":"20510388-23e4-4945-a5d5-db74a909518c","Type":"ContainerDied","Data":"dc42713a41213f099e98f285de543ecdef8d9d1cfd6b2e17fedd10e0a05f9324"}
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.703017 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc42713a41213f099e98f285de543ecdef8d9d1cfd6b2e17fedd10e0a05f9324"
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.702985 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8zbms"
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.714526 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwsk5\" (UniqueName: \"kubernetes.io/projected/7cf90405-9df9-4821-831d-f6bb66f3268e-kube-api-access-gwsk5\") on node \"crc\" DevicePath \"\""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.714582 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20510388-23e4-4945-a5d5-db74a909518c-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.714595 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crsgc\" (UniqueName: \"kubernetes.io/projected/20510388-23e4-4945-a5d5-db74a909518c-kube-api-access-crsgc\") on node \"crc\" DevicePath \"\""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.714611 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53499ab2-7d33-4eb2-88da-fc49dc29009f-config\") on node \"crc\" DevicePath \"\""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.714623 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce78868c-e61f-4d87-9e79-27b29a75644d-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.714634 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlqw9\" (UniqueName: \"kubernetes.io/projected/81700a7f-32e9-45dd-b223-058f4340deb4-kube-api-access-dlqw9\") on node \"crc\" DevicePath \"\""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.714646 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53499ab2-7d33-4eb2-88da-fc49dc29009f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.714676 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwfbr\" (UniqueName: \"kubernetes.io/projected/53499ab2-7d33-4eb2-88da-fc49dc29009f-kube-api-access-dwfbr\") on node \"crc\" DevicePath \"\""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.714688 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81700a7f-32e9-45dd-b223-058f4340deb4-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.714700 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53499ab2-7d33-4eb2-88da-fc49dc29009f-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.714710 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cf90405-9df9-4821-831d-f6bb66f3268e-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.714719 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53499ab2-7d33-4eb2-88da-fc49dc29009f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.714728 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcjc6\" (UniqueName: \"kubernetes.io/projected/ce78868c-e61f-4d87-9e79-27b29a75644d-kube-api-access-dcjc6\") on node \"crc\" DevicePath \"\""
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.723153 4766 generic.go:334] "Generic (PLEG): container finished" podID="53499ab2-7d33-4eb2-88da-fc49dc29009f" containerID="1aa5c5556afd2aa58d9cdfcd1d516e610002c2c03f8bb37e88eccd96a502dfc5" exitCode=0
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.723236 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-bh8d5" event={"ID":"53499ab2-7d33-4eb2-88da-fc49dc29009f","Type":"ContainerDied","Data":"1aa5c5556afd2aa58d9cdfcd1d516e610002c2c03f8bb37e88eccd96a502dfc5"}
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.723271 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-bh8d5" event={"ID":"53499ab2-7d33-4eb2-88da-fc49dc29009f","Type":"ContainerDied","Data":"9b0c5cdc295abc498938c1ed4d9ce2bb72b47dd7fbc7ca82ebc0cac3c60d3e21"}
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.723387 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-bh8d5"
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.723292 4766 scope.go:117] "RemoveContainer" containerID="1aa5c5556afd2aa58d9cdfcd1d516e610002c2c03f8bb37e88eccd96a502dfc5"
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.733357 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-fb14-account-create-update-ndzx8"
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.734037 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fb14-account-create-update-ndzx8" event={"ID":"bf714f89-4e2e-43cb-9cbb-427c6270e65e","Type":"ContainerDied","Data":"be30adae9f6d11531c6bebedda63242049e276b08d51bb5f9f4e45f72bb024c3"}
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.734080 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be30adae9f6d11531c6bebedda63242049e276b08d51bb5f9f4e45f72bb024c3"
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.760376 4766 scope.go:117] "RemoveContainer" containerID="f39ad743cd1d52de22fe04c6c34b0317eb9e3e1b12ad8d3ef10405eb433afa64"
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.767588 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-bh8d5"]
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.775758 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-bh8d5"]
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.797197 4766 scope.go:117] "RemoveContainer" containerID="1aa5c5556afd2aa58d9cdfcd1d516e610002c2c03f8bb37e88eccd96a502dfc5"
Jan 29 11:45:18 crc kubenswrapper[4766]: E0129 11:45:18.797748 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aa5c5556afd2aa58d9cdfcd1d516e610002c2c03f8bb37e88eccd96a502dfc5\": container with ID starting with 1aa5c5556afd2aa58d9cdfcd1d516e610002c2c03f8bb37e88eccd96a502dfc5 not found: ID does not exist" containerID="1aa5c5556afd2aa58d9cdfcd1d516e610002c2c03f8bb37e88eccd96a502dfc5"
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.797796 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aa5c5556afd2aa58d9cdfcd1d516e610002c2c03f8bb37e88eccd96a502dfc5"} err="failed to get container status \"1aa5c5556afd2aa58d9cdfcd1d516e610002c2c03f8bb37e88eccd96a502dfc5\": rpc error: code = NotFound desc = could not find container \"1aa5c5556afd2aa58d9cdfcd1d516e610002c2c03f8bb37e88eccd96a502dfc5\": container with ID starting with 1aa5c5556afd2aa58d9cdfcd1d516e610002c2c03f8bb37e88eccd96a502dfc5 not found: ID does not exist"
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.797830 4766 scope.go:117] "RemoveContainer" containerID="f39ad743cd1d52de22fe04c6c34b0317eb9e3e1b12ad8d3ef10405eb433afa64"
Jan 29 11:45:18 crc kubenswrapper[4766]: E0129 11:45:18.798463 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f39ad743cd1d52de22fe04c6c34b0317eb9e3e1b12ad8d3ef10405eb433afa64\": container with ID starting with f39ad743cd1d52de22fe04c6c34b0317eb9e3e1b12ad8d3ef10405eb433afa64 not found: ID does not exist" containerID="f39ad743cd1d52de22fe04c6c34b0317eb9e3e1b12ad8d3ef10405eb433afa64"
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.798554 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f39ad743cd1d52de22fe04c6c34b0317eb9e3e1b12ad8d3ef10405eb433afa64"} err="failed to get container status \"f39ad743cd1d52de22fe04c6c34b0317eb9e3e1b12ad8d3ef10405eb433afa64\": rpc error: code = NotFound desc = could not find container \"f39ad743cd1d52de22fe04c6c34b0317eb9e3e1b12ad8d3ef10405eb433afa64\": container with ID starting with f39ad743cd1d52de22fe04c6c34b0317eb9e3e1b12ad8d3ef10405eb433afa64 not found: ID does not exist"
Jan 29 11:45:18 crc kubenswrapper[4766]: I0129 11:45:18.998734 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4e41-account-create-update-p9lr6"
Jan 29 11:45:19 crc kubenswrapper[4766]: I0129 11:45:19.124545 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9vmr\" (UniqueName: \"kubernetes.io/projected/f5dc4d47-9e72-4287-be4f-176017f5c41a-kube-api-access-g9vmr\") pod \"f5dc4d47-9e72-4287-be4f-176017f5c41a\" (UID: \"f5dc4d47-9e72-4287-be4f-176017f5c41a\") "
Jan 29 11:45:19 crc kubenswrapper[4766]: I0129 11:45:19.124675 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5dc4d47-9e72-4287-be4f-176017f5c41a-operator-scripts\") pod \"f5dc4d47-9e72-4287-be4f-176017f5c41a\" (UID: \"f5dc4d47-9e72-4287-be4f-176017f5c41a\") "
Jan 29 11:45:19 crc kubenswrapper[4766]: I0129 11:45:19.125576 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5dc4d47-9e72-4287-be4f-176017f5c41a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f5dc4d47-9e72-4287-be4f-176017f5c41a" (UID: "f5dc4d47-9e72-4287-be4f-176017f5c41a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:45:19 crc kubenswrapper[4766]: I0129 11:45:19.130945 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5dc4d47-9e72-4287-be4f-176017f5c41a-kube-api-access-g9vmr" (OuterVolumeSpecName: "kube-api-access-g9vmr") pod "f5dc4d47-9e72-4287-be4f-176017f5c41a" (UID: "f5dc4d47-9e72-4287-be4f-176017f5c41a"). InnerVolumeSpecName "kube-api-access-g9vmr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:45:19 crc kubenswrapper[4766]: I0129 11:45:19.225999 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9vmr\" (UniqueName: \"kubernetes.io/projected/f5dc4d47-9e72-4287-be4f-176017f5c41a-kube-api-access-g9vmr\") on node \"crc\" DevicePath \"\""
Jan 29 11:45:19 crc kubenswrapper[4766]: I0129 11:45:19.226045 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5dc4d47-9e72-4287-be4f-176017f5c41a-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 11:45:19 crc kubenswrapper[4766]: I0129 11:45:19.233500 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53499ab2-7d33-4eb2-88da-fc49dc29009f" path="/var/lib/kubelet/pods/53499ab2-7d33-4eb2-88da-fc49dc29009f/volumes"
Jan 29 11:45:19 crc kubenswrapper[4766]: I0129 11:45:19.741903 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4e41-account-create-update-p9lr6" event={"ID":"f5dc4d47-9e72-4287-be4f-176017f5c41a","Type":"ContainerDied","Data":"bac2b63e5ed7d71c55a8074908e8c48842aa2d59c504549b5edfb4b215257971"}
Jan 29 11:45:19 crc kubenswrapper[4766]: I0129 11:45:19.742248 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bac2b63e5ed7d71c55a8074908e8c48842aa2d59c504549b5edfb4b215257971"
Jan 29 11:45:19 crc kubenswrapper[4766]: I0129 11:45:19.741991 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4e41-account-create-update-p9lr6"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.395119 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-mhmgq"]
Jan 29 11:45:20 crc kubenswrapper[4766]: E0129 11:45:20.395526 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf714f89-4e2e-43cb-9cbb-427c6270e65e" containerName="mariadb-account-create-update"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.395548 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf714f89-4e2e-43cb-9cbb-427c6270e65e" containerName="mariadb-account-create-update"
Jan 29 11:45:20 crc kubenswrapper[4766]: E0129 11:45:20.395571 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81700a7f-32e9-45dd-b223-058f4340deb4" containerName="mariadb-database-create"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.395580 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="81700a7f-32e9-45dd-b223-058f4340deb4" containerName="mariadb-database-create"
Jan 29 11:45:20 crc kubenswrapper[4766]: E0129 11:45:20.395596 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20510388-23e4-4945-a5d5-db74a909518c" containerName="mariadb-database-create"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.395606 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="20510388-23e4-4945-a5d5-db74a909518c" containerName="mariadb-database-create"
Jan 29 11:45:20 crc kubenswrapper[4766]: E0129 11:45:20.395615 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53499ab2-7d33-4eb2-88da-fc49dc29009f" containerName="dnsmasq-dns"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.395622 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="53499ab2-7d33-4eb2-88da-fc49dc29009f" containerName="dnsmasq-dns"
Jan 29 11:45:20 crc kubenswrapper[4766]: E0129 11:45:20.395642 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5dc4d47-9e72-4287-be4f-176017f5c41a" containerName="mariadb-account-create-update"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.395650 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5dc4d47-9e72-4287-be4f-176017f5c41a" containerName="mariadb-account-create-update"
Jan 29 11:45:20 crc kubenswrapper[4766]: E0129 11:45:20.395660 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cf90405-9df9-4821-831d-f6bb66f3268e" containerName="mariadb-account-create-update"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.395667 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cf90405-9df9-4821-831d-f6bb66f3268e" containerName="mariadb-account-create-update"
Jan 29 11:45:20 crc kubenswrapper[4766]: E0129 11:45:20.395676 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce78868c-e61f-4d87-9e79-27b29a75644d" containerName="mariadb-database-create"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.395684 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce78868c-e61f-4d87-9e79-27b29a75644d" containerName="mariadb-database-create"
Jan 29 11:45:20 crc kubenswrapper[4766]: E0129 11:45:20.395699 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53499ab2-7d33-4eb2-88da-fc49dc29009f" containerName="init"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.395706 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="53499ab2-7d33-4eb2-88da-fc49dc29009f" containerName="init"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.395895 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="81700a7f-32e9-45dd-b223-058f4340deb4" containerName="mariadb-database-create"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.395908 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce78868c-e61f-4d87-9e79-27b29a75644d" containerName="mariadb-database-create"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.395919 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="53499ab2-7d33-4eb2-88da-fc49dc29009f" containerName="dnsmasq-dns"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.395930 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cf90405-9df9-4821-831d-f6bb66f3268e" containerName="mariadb-account-create-update"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.395947 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf714f89-4e2e-43cb-9cbb-427c6270e65e" containerName="mariadb-account-create-update"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.395958 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5dc4d47-9e72-4287-be4f-176017f5c41a" containerName="mariadb-account-create-update"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.395970 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="20510388-23e4-4945-a5d5-db74a909518c" containerName="mariadb-database-create"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.396510 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mhmgq"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.399474 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-z2vhg"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.403276 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.407954 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mhmgq"]
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.547033 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b251ce0-eaf1-43fc-97a0-e59a8b829b28-combined-ca-bundle\") pod \"glance-db-sync-mhmgq\" (UID: \"7b251ce0-eaf1-43fc-97a0-e59a8b829b28\") " pod="openstack/glance-db-sync-mhmgq"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.547117 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7b251ce0-eaf1-43fc-97a0-e59a8b829b28-db-sync-config-data\") pod \"glance-db-sync-mhmgq\" (UID: \"7b251ce0-eaf1-43fc-97a0-e59a8b829b28\") " pod="openstack/glance-db-sync-mhmgq"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.547179 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b251ce0-eaf1-43fc-97a0-e59a8b829b28-config-data\") pod \"glance-db-sync-mhmgq\" (UID: \"7b251ce0-eaf1-43fc-97a0-e59a8b829b28\") " pod="openstack/glance-db-sync-mhmgq"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.547307 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68984\" (UniqueName: \"kubernetes.io/projected/7b251ce0-eaf1-43fc-97a0-e59a8b829b28-kube-api-access-68984\") pod \"glance-db-sync-mhmgq\" (UID: \"7b251ce0-eaf1-43fc-97a0-e59a8b829b28\") " pod="openstack/glance-db-sync-mhmgq"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.649491 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b251ce0-eaf1-43fc-97a0-e59a8b829b28-combined-ca-bundle\") pod \"glance-db-sync-mhmgq\" (UID: \"7b251ce0-eaf1-43fc-97a0-e59a8b829b28\") " pod="openstack/glance-db-sync-mhmgq"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.649599 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7b251ce0-eaf1-43fc-97a0-e59a8b829b28-db-sync-config-data\") pod \"glance-db-sync-mhmgq\" (UID: \"7b251ce0-eaf1-43fc-97a0-e59a8b829b28\") " pod="openstack/glance-db-sync-mhmgq"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.649627 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b251ce0-eaf1-43fc-97a0-e59a8b829b28-config-data\") pod \"glance-db-sync-mhmgq\" (UID: \"7b251ce0-eaf1-43fc-97a0-e59a8b829b28\") " pod="openstack/glance-db-sync-mhmgq"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.649668 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68984\" (UniqueName: \"kubernetes.io/projected/7b251ce0-eaf1-43fc-97a0-e59a8b829b28-kube-api-access-68984\") pod \"glance-db-sync-mhmgq\" (UID: \"7b251ce0-eaf1-43fc-97a0-e59a8b829b28\") " pod="openstack/glance-db-sync-mhmgq"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.654379 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7b251ce0-eaf1-43fc-97a0-e59a8b829b28-db-sync-config-data\") pod \"glance-db-sync-mhmgq\" (UID: \"7b251ce0-eaf1-43fc-97a0-e59a8b829b28\") " pod="openstack/glance-db-sync-mhmgq"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.654624 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b251ce0-eaf1-43fc-97a0-e59a8b829b28-combined-ca-bundle\") pod \"glance-db-sync-mhmgq\" (UID: \"7b251ce0-eaf1-43fc-97a0-e59a8b829b28\") " pod="openstack/glance-db-sync-mhmgq"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.654666 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b251ce0-eaf1-43fc-97a0-e59a8b829b28-config-data\") pod \"glance-db-sync-mhmgq\" (UID: \"7b251ce0-eaf1-43fc-97a0-e59a8b829b28\") " pod="openstack/glance-db-sync-mhmgq"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.670496 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68984\" (UniqueName: \"kubernetes.io/projected/7b251ce0-eaf1-43fc-97a0-e59a8b829b28-kube-api-access-68984\") pod \"glance-db-sync-mhmgq\" (UID: \"7b251ce0-eaf1-43fc-97a0-e59a8b829b28\") " pod="openstack/glance-db-sync-mhmgq"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.714486 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mhmgq"
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.753473 4766 generic.go:334] "Generic (PLEG): container finished" podID="628d9a82-bc49-44b5-a259-9d7f39bcb803" containerID="7e858737daba72926bb1c1a68da1eac711ef60cc06bf99c8cbce6410dc3a5bde" exitCode=0
Jan 29 11:45:20 crc kubenswrapper[4766]: I0129 11:45:20.753515 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-skcrx" event={"ID":"628d9a82-bc49-44b5-a259-9d7f39bcb803","Type":"ContainerDied","Data":"7e858737daba72926bb1c1a68da1eac711ef60cc06bf99c8cbce6410dc3a5bde"}
Jan 29 11:45:21 crc kubenswrapper[4766]: I0129 11:45:21.160783 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Jan 29 11:45:21 crc kubenswrapper[4766]: I0129 11:45:21.253186 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mhmgq"]
Jan 29 11:45:21 crc kubenswrapper[4766]: I0129 11:45:21.659120 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-twxsv"]
Jan 29 11:45:21 crc kubenswrapper[4766]: I0129 11:45:21.660596 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-twxsv"
Jan 29 11:45:21 crc kubenswrapper[4766]: I0129 11:45:21.662955 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 29 11:45:21 crc kubenswrapper[4766]: I0129 11:45:21.692740 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-twxsv"]
Jan 29 11:45:21 crc kubenswrapper[4766]: I0129 11:45:21.762271 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mhmgq" event={"ID":"7b251ce0-eaf1-43fc-97a0-e59a8b829b28","Type":"ContainerStarted","Data":"69b5a683936345c2a9a0deb0e6f258ef0bd95fc61640ce7f6ccfb8a9e06ea063"}
Jan 29 11:45:21 crc kubenswrapper[4766]: I0129 11:45:21.768726 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zh9m\" (UniqueName: \"kubernetes.io/projected/f0488f59-d48c-45f2-896e-562e7deb5545-kube-api-access-2zh9m\") pod \"root-account-create-update-twxsv\" (UID: \"f0488f59-d48c-45f2-896e-562e7deb5545\") " pod="openstack/root-account-create-update-twxsv"
Jan 29 11:45:21 crc kubenswrapper[4766]: I0129 11:45:21.768855 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0488f59-d48c-45f2-896e-562e7deb5545-operator-scripts\") pod \"root-account-create-update-twxsv\" (UID: \"f0488f59-d48c-45f2-896e-562e7deb5545\") " pod="openstack/root-account-create-update-twxsv"
Jan 29 11:45:21 crc kubenswrapper[4766]: I0129 11:45:21.870078 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zh9m\" (UniqueName: \"kubernetes.io/projected/f0488f59-d48c-45f2-896e-562e7deb5545-kube-api-access-2zh9m\") pod \"root-account-create-update-twxsv\" (UID: \"f0488f59-d48c-45f2-896e-562e7deb5545\") " pod="openstack/root-account-create-update-twxsv"
Jan 29 11:45:21 crc kubenswrapper[4766]: I0129 11:45:21.870153 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0488f59-d48c-45f2-896e-562e7deb5545-operator-scripts\") pod \"root-account-create-update-twxsv\" (UID: \"f0488f59-d48c-45f2-896e-562e7deb5545\") " pod="openstack/root-account-create-update-twxsv"
Jan 29 11:45:21 crc kubenswrapper[4766]: I0129 11:45:21.871136 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0488f59-d48c-45f2-896e-562e7deb5545-operator-scripts\") pod \"root-account-create-update-twxsv\" (UID: \"f0488f59-d48c-45f2-896e-562e7deb5545\") " pod="openstack/root-account-create-update-twxsv"
Jan 29 11:45:21 crc kubenswrapper[4766]: I0129 11:45:21.887989 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zh9m\" (UniqueName: \"kubernetes.io/projected/f0488f59-d48c-45f2-896e-562e7deb5545-kube-api-access-2zh9m\") pod \"root-account-create-update-twxsv\" (UID: \"f0488f59-d48c-45f2-896e-562e7deb5545\") " pod="openstack/root-account-create-update-twxsv"
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.004305 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-twxsv"
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.097060 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-skcrx"
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.277798 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/628d9a82-bc49-44b5-a259-9d7f39bcb803-dispersionconf\") pod \"628d9a82-bc49-44b5-a259-9d7f39bcb803\" (UID: \"628d9a82-bc49-44b5-a259-9d7f39bcb803\") "
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.277862 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/628d9a82-bc49-44b5-a259-9d7f39bcb803-swiftconf\") pod \"628d9a82-bc49-44b5-a259-9d7f39bcb803\" (UID: \"628d9a82-bc49-44b5-a259-9d7f39bcb803\") "
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.277903 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvdj6\" (UniqueName: \"kubernetes.io/projected/628d9a82-bc49-44b5-a259-9d7f39bcb803-kube-api-access-cvdj6\") pod \"628d9a82-bc49-44b5-a259-9d7f39bcb803\" (UID: \"628d9a82-bc49-44b5-a259-9d7f39bcb803\") "
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.277936 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/628d9a82-bc49-44b5-a259-9d7f39bcb803-etc-swift\") pod \"628d9a82-bc49-44b5-a259-9d7f39bcb803\" (UID: \"628d9a82-bc49-44b5-a259-9d7f39bcb803\") "
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.277965 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/628d9a82-bc49-44b5-a259-9d7f39bcb803-combined-ca-bundle\") pod \"628d9a82-bc49-44b5-a259-9d7f39bcb803\" (UID: \"628d9a82-bc49-44b5-a259-9d7f39bcb803\") "
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.278000 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/628d9a82-bc49-44b5-a259-9d7f39bcb803-ring-data-devices\") pod \"628d9a82-bc49-44b5-a259-9d7f39bcb803\" (UID: \"628d9a82-bc49-44b5-a259-9d7f39bcb803\") "
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.278047 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/628d9a82-bc49-44b5-a259-9d7f39bcb803-scripts\") pod \"628d9a82-bc49-44b5-a259-9d7f39bcb803\" (UID: \"628d9a82-bc49-44b5-a259-9d7f39bcb803\") "
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.279329 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/628d9a82-bc49-44b5-a259-9d7f39bcb803-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "628d9a82-bc49-44b5-a259-9d7f39bcb803" (UID: "628d9a82-bc49-44b5-a259-9d7f39bcb803"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.279562 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/628d9a82-bc49-44b5-a259-9d7f39bcb803-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "628d9a82-bc49-44b5-a259-9d7f39bcb803" (UID: "628d9a82-bc49-44b5-a259-9d7f39bcb803"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.293823 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/628d9a82-bc49-44b5-a259-9d7f39bcb803-kube-api-access-cvdj6" (OuterVolumeSpecName: "kube-api-access-cvdj6") pod "628d9a82-bc49-44b5-a259-9d7f39bcb803" (UID: "628d9a82-bc49-44b5-a259-9d7f39bcb803"). InnerVolumeSpecName "kube-api-access-cvdj6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.295745 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/628d9a82-bc49-44b5-a259-9d7f39bcb803-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "628d9a82-bc49-44b5-a259-9d7f39bcb803" (UID: "628d9a82-bc49-44b5-a259-9d7f39bcb803"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.299077 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/628d9a82-bc49-44b5-a259-9d7f39bcb803-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "628d9a82-bc49-44b5-a259-9d7f39bcb803" (UID: "628d9a82-bc49-44b5-a259-9d7f39bcb803"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.301921 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/628d9a82-bc49-44b5-a259-9d7f39bcb803-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "628d9a82-bc49-44b5-a259-9d7f39bcb803" (UID: "628d9a82-bc49-44b5-a259-9d7f39bcb803"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.307289 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/628d9a82-bc49-44b5-a259-9d7f39bcb803-scripts" (OuterVolumeSpecName: "scripts") pod "628d9a82-bc49-44b5-a259-9d7f39bcb803" (UID: "628d9a82-bc49-44b5-a259-9d7f39bcb803"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.380113 4766 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/628d9a82-bc49-44b5-a259-9d7f39bcb803-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.380159 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/628d9a82-bc49-44b5-a259-9d7f39bcb803-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.380175 4766 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/628d9a82-bc49-44b5-a259-9d7f39bcb803-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.380188 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/628d9a82-bc49-44b5-a259-9d7f39bcb803-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.380199 4766 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/628d9a82-bc49-44b5-a259-9d7f39bcb803-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.380211 4766 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/628d9a82-bc49-44b5-a259-9d7f39bcb803-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.380223 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvdj6\" (UniqueName: \"kubernetes.io/projected/628d9a82-bc49-44b5-a259-9d7f39bcb803-kube-api-access-cvdj6\") on node \"crc\" DevicePath \"\""
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.449965 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-twxsv"]
Jan 29 11:45:22 crc kubenswrapper[4766]: W0129 11:45:22.470487 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0488f59_d48c_45f2_896e_562e7deb5545.slice/crio-d6326b4789feff1b52f30afd059cd7c4f5481e3cf2c2f54c0cb3cc31e898b057 WatchSource:0}: Error finding container d6326b4789feff1b52f30afd059cd7c4f5481e3cf2c2f54c0cb3cc31e898b057: Status 404 returned error can't find the container with id d6326b4789feff1b52f30afd059cd7c4f5481e3cf2c2f54c0cb3cc31e898b057
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.772378 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-twxsv" event={"ID":"f0488f59-d48c-45f2-896e-562e7deb5545","Type":"ContainerStarted","Data":"6888ef0b10075315f9d9a04b4f6de56e997d905248e9aa9f273d9e651d9ed15e"}
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.772450 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-twxsv" event={"ID":"f0488f59-d48c-45f2-896e-562e7deb5545","Type":"ContainerStarted","Data":"d6326b4789feff1b52f30afd059cd7c4f5481e3cf2c2f54c0cb3cc31e898b057"}
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.777377 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-skcrx" event={"ID":"628d9a82-bc49-44b5-a259-9d7f39bcb803","Type":"ContainerDied","Data":"66abf514f6b2939250194ca511b892a1b0ccbfd717d7a1ab17ce00489511ab17"}
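
Each volume above runs through the same three-step teardown: "operationExecutor.UnmountVolume started" (reconciler_common.go:159), "UnmountVolume.TearDown succeeded" (operation_generator.go:803), and finally "Volume detached" (reconciler_common.go:293). A rough sketch for pairing the first and last steps by UniqueName, to surface any volume whose detach record never arrives; the pattern tolerates the \" escaping in these records, and the program itself is only an illustration, not kubelet tooling:

// pending_volumes.go - pair "UnmountVolume started" with "Volume detached"
// records from a kubelet journal dump on stdin and report volumes still
// pending when the input ends.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

// Structured klog messages in the journal escape quotes as \", so the
// pattern accepts an optional backslash on each side of the UniqueName.
var uniqueName = regexp.MustCompile(`UniqueName: \\?"([^"\\]+)\\?"`)

func main() {
	pending := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1<<20), 1<<20) // journal lines can be very long
	for sc.Scan() {
		line := sc.Text()
		m := uniqueName.FindStringSubmatch(line)
		if m == nil {
			continue
		}
		switch {
		case strings.Contains(line, "operationExecutor.UnmountVolume started"):
			pending[m[1]] = true
		case strings.Contains(line, "Volume detached for volume"):
			delete(pending, m[1])
		}
	}
	for v := range pending {
		fmt.Println("unmount started but no detach record seen:", v)
	}
}

For the swift-ring-rebalance-skcrx window above it would print nothing: all seven volumes that start unmounting at 11:45:22.277 are detached by 11:45:22.380.
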
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.777472 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66abf514f6b2939250194ca511b892a1b0ccbfd717d7a1ab17ce00489511ab17"
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.777425 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-skcrx"
Jan 29 11:45:22 crc kubenswrapper[4766]: I0129 11:45:22.793334 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-twxsv" podStartSLOduration=1.793315533 podStartE2EDuration="1.793315533s" podCreationTimestamp="2026-01-29 11:45:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:45:22.788272493 +0000 UTC m=+1459.900665504" watchObservedRunningTime="2026-01-29 11:45:22.793315533 +0000 UTC m=+1459.905708534"
Jan 29 11:45:23 crc kubenswrapper[4766]: I0129 11:45:23.169713 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-bh8d5" podUID="53499ab2-7d33-4eb2-88da-fc49dc29009f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.107:5353: i/o timeout"
Jan 29 11:45:23 crc kubenswrapper[4766]: I0129 11:45:23.790658 4766 generic.go:334] "Generic (PLEG): container finished" podID="f0488f59-d48c-45f2-896e-562e7deb5545" containerID="6888ef0b10075315f9d9a04b4f6de56e997d905248e9aa9f273d9e651d9ed15e" exitCode=0
Jan 29 11:45:23 crc kubenswrapper[4766]: I0129 11:45:23.790740 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-twxsv" event={"ID":"f0488f59-d48c-45f2-896e-562e7deb5545","Type":"ContainerDied","Data":"6888ef0b10075315f9d9a04b4f6de56e997d905248e9aa9f273d9e651d9ed15e"}
Jan 29 11:45:24 crc kubenswrapper[4766]: I0129 11:45:24.114904 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c299dfaa-12db-4482-ab89-55ba85b8e2a7-etc-swift\") pod \"swift-storage-0\" (UID: \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") " pod="openstack/swift-storage-0"
Jan 29 11:45:24 crc kubenswrapper[4766]: I0129 11:45:24.126097 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c299dfaa-12db-4482-ab89-55ba85b8e2a7-etc-swift\") pod \"swift-storage-0\" (UID: \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") " pod="openstack/swift-storage-0"
Jan 29 11:45:24 crc kubenswrapper[4766]: I0129 11:45:24.256470 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 29 11:45:24 crc kubenswrapper[4766]: I0129 11:45:24.778114 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 29 11:45:24 crc kubenswrapper[4766]: W0129 11:45:24.785261 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc299dfaa_12db_4482_ab89_55ba85b8e2a7.slice/crio-00d2dddd84ce0b74b92d4be4bc9599cce85a26c3c8910d5387fb145c688129de WatchSource:0}: Error finding container 00d2dddd84ce0b74b92d4be4bc9599cce85a26c3c8910d5387fb145c688129de: Status 404 returned error can't find the container with id 00d2dddd84ce0b74b92d4be4bc9599cce85a26c3c8910d5387fb145c688129de
Jan 29 11:45:24 crc kubenswrapper[4766]: I0129 11:45:24.806605 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerStarted","Data":"00d2dddd84ce0b74b92d4be4bc9599cce85a26c3c8910d5387fb145c688129de"}
Jan 29 11:45:25 crc kubenswrapper[4766]: I0129 11:45:25.162473 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-twxsv"
Jan 29 11:45:25 crc kubenswrapper[4766]: I0129 11:45:25.337809 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zh9m\" (UniqueName: \"kubernetes.io/projected/f0488f59-d48c-45f2-896e-562e7deb5545-kube-api-access-2zh9m\") pod \"f0488f59-d48c-45f2-896e-562e7deb5545\" (UID: \"f0488f59-d48c-45f2-896e-562e7deb5545\") "
Jan 29 11:45:25 crc kubenswrapper[4766]: I0129 11:45:25.339218 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0488f59-d48c-45f2-896e-562e7deb5545-operator-scripts\") pod \"f0488f59-d48c-45f2-896e-562e7deb5545\" (UID: \"f0488f59-d48c-45f2-896e-562e7deb5545\") "
Jan 29 11:45:25 crc kubenswrapper[4766]: I0129 11:45:25.339775 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0488f59-d48c-45f2-896e-562e7deb5545-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f0488f59-d48c-45f2-896e-562e7deb5545" (UID: "f0488f59-d48c-45f2-896e-562e7deb5545"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:45:25 crc kubenswrapper[4766]: I0129 11:45:25.340203 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0488f59-d48c-45f2-896e-562e7deb5545-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 11:45:25 crc kubenswrapper[4766]: I0129 11:45:25.344877 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0488f59-d48c-45f2-896e-562e7deb5545-kube-api-access-2zh9m" (OuterVolumeSpecName: "kube-api-access-2zh9m") pod "f0488f59-d48c-45f2-896e-562e7deb5545" (UID: "f0488f59-d48c-45f2-896e-562e7deb5545"). InnerVolumeSpecName "kube-api-access-2zh9m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:45:25 crc kubenswrapper[4766]: I0129 11:45:25.441375 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zh9m\" (UniqueName: \"kubernetes.io/projected/f0488f59-d48c-45f2-896e-562e7deb5545-kube-api-access-2zh9m\") on node \"crc\" DevicePath \"\""
Jan 29 11:45:25 crc kubenswrapper[4766]: I0129 11:45:25.816781 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-twxsv" event={"ID":"f0488f59-d48c-45f2-896e-562e7deb5545","Type":"ContainerDied","Data":"d6326b4789feff1b52f30afd059cd7c4f5481e3cf2c2f54c0cb3cc31e898b057"}
Jan 29 11:45:25 crc kubenswrapper[4766]: I0129 11:45:25.816822 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6326b4789feff1b52f30afd059cd7c4f5481e3cf2c2f54c0cb3cc31e898b057"
Jan 29 11:45:25 crc kubenswrapper[4766]: I0129 11:45:25.816881 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-twxsv"
Jan 29 11:45:26 crc kubenswrapper[4766]: I0129 11:45:26.205772 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-5kz4c" podUID="73cf0e15-caab-4cea-94b5-7470d635d767" containerName="ovn-controller" probeResult="failure" output=<
Jan 29 11:45:26 crc kubenswrapper[4766]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 29 11:45:26 crc kubenswrapper[4766]: >
Jan 29 11:45:26 crc kubenswrapper[4766]: I0129 11:45:26.304692 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-2gh2n"
Jan 29 11:45:26 crc kubenswrapper[4766]: I0129 11:45:26.827571 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerStarted","Data":"0a43006268a6331aa5c508b013f959c36b198052b905a54a63dfcc6e786548d6"}
Jan 29 11:45:27 crc kubenswrapper[4766]: I0129 11:45:27.839215 4766 generic.go:334] "Generic (PLEG): container finished" podID="ace2f6ec-cf57-4742-82e9-e13fd230bb69" containerID="35d741477652fd2fdab85e5a190f27cf16637cca6d3186932dfe4f9ff8c8c1c1" exitCode=0
Jan 29 11:45:27 crc kubenswrapper[4766]: I0129 11:45:27.839256 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ace2f6ec-cf57-4742-82e9-e13fd230bb69","Type":"ContainerDied","Data":"35d741477652fd2fdab85e5a190f27cf16637cca6d3186932dfe4f9ff8c8c1c1"}
Jan 29 11:45:28 crc kubenswrapper[4766]: I0129 11:45:28.115031 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-twxsv"]
Jan 29 11:45:28 crc kubenswrapper[4766]: I0129 11:45:28.121605 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-twxsv"]
Jan 29 11:45:28 crc kubenswrapper[4766]: I0129 11:45:28.848169 4766 generic.go:334] "Generic (PLEG): container finished" podID="b77b577e-b980-46fb-945a-a0b57e3bdc17" containerID="a7bd65c4cb6402ca31a9d412ea5ab09924e3681dbdd63afcca07deade4b71a0b" exitCode=0
Jan 29 11:45:28 crc kubenswrapper[4766]: I0129 11:45:28.848216 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b77b577e-b980-46fb-945a-a0b57e3bdc17","Type":"ContainerDied","Data":"a7bd65c4cb6402ca31a9d412ea5ab09924e3681dbdd63afcca07deade4b71a0b"}
Jan 29 11:45:29 crc kubenswrapper[4766]: I0129 11:45:29.234576 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0488f59-d48c-45f2-896e-562e7deb5545" path="/var/lib/kubelet/pods/f0488f59-d48c-45f2-896e-562e7deb5545/volumes"
Jan 29 11:45:31 crc kubenswrapper[4766]: I0129 11:45:31.206864 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-5kz4c" podUID="73cf0e15-caab-4cea-94b5-7470d635d767" containerName="ovn-controller" probeResult="failure" output=<
Jan 29 11:45:31 crc kubenswrapper[4766]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 29 11:45:31 crc kubenswrapper[4766]: >
Jan 29 11:45:31 crc kubenswrapper[4766]: I0129 11:45:31.341702 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-2gh2n"
Jan 29 11:45:31 crc kubenswrapper[4766]: I0129 11:45:31.549641 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5kz4c-config-bgq46"]
Jan 29 11:45:31 crc kubenswrapper[4766]: E0129 11:45:31.550084 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="628d9a82-bc49-44b5-a259-9d7f39bcb803" containerName="swift-ring-rebalance"
Jan 29 11:45:31 crc kubenswrapper[4766]: I0129 11:45:31.550110 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="628d9a82-bc49-44b5-a259-9d7f39bcb803" containerName="swift-ring-rebalance"
Jan 29 11:45:31 crc kubenswrapper[4766]: E0129 11:45:31.550124 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0488f59-d48c-45f2-896e-562e7deb5545" containerName="mariadb-account-create-update"
Jan 29 11:45:31 crc kubenswrapper[4766]: I0129 11:45:31.550134 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0488f59-d48c-45f2-896e-562e7deb5545" containerName="mariadb-account-create-update"
Jan 29 11:45:31 crc kubenswrapper[4766]: I0129 11:45:31.550710 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0488f59-d48c-45f2-896e-562e7deb5545" containerName="mariadb-account-create-update"
Jan 29 11:45:31 crc kubenswrapper[4766]: I0129 11:45:31.550739 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="628d9a82-bc49-44b5-a259-9d7f39bcb803" containerName="swift-ring-rebalance"
Jan 29 11:45:31 crc kubenswrapper[4766]: I0129 11:45:31.551481 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5kz4c-config-bgq46"
Jan 29 11:45:31 crc kubenswrapper[4766]: I0129 11:45:31.554333 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Jan 29 11:45:31 crc kubenswrapper[4766]: I0129 11:45:31.575404 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5kz4c-config-bgq46"]
Jan 29 11:45:31 crc kubenswrapper[4766]: I0129 11:45:31.650881 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-var-run\") pod \"ovn-controller-5kz4c-config-bgq46\" (UID: \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\") " pod="openstack/ovn-controller-5kz4c-config-bgq46"
Jan 29 11:45:31 crc kubenswrapper[4766]: I0129 11:45:31.650972 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-additional-scripts\") pod \"ovn-controller-5kz4c-config-bgq46\" (UID: \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\") " pod="openstack/ovn-controller-5kz4c-config-bgq46"
Jan 29 11:45:31 crc kubenswrapper[4766]: I0129 11:45:31.651005 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-var-log-ovn\") pod \"ovn-controller-5kz4c-config-bgq46\" (UID: \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\") " pod="openstack/ovn-controller-5kz4c-config-bgq46"
Jan 29 11:45:31 crc kubenswrapper[4766]: I0129 11:45:31.651396 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4scpf\" (UniqueName: \"kubernetes.io/projected/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-kube-api-access-4scpf\") pod \"ovn-controller-5kz4c-config-bgq46\" (UID: \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\") " pod="openstack/ovn-controller-5kz4c-config-bgq46"
Jan 29 11:45:31 crc kubenswrapper[4766]: I0129 11:45:31.651507 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-var-run-ovn\") pod \"ovn-controller-5kz4c-config-bgq46\" (UID: \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\") " pod="openstack/ovn-controller-5kz4c-config-bgq46"
Jan 29 11:45:31 crc kubenswrapper[4766]: I0129 11:45:31.651574 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-scripts\") pod \"ovn-controller-5kz4c-config-bgq46\" (UID: \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\") " pod="openstack/ovn-controller-5kz4c-config-bgq46"
Jan 29 11:45:31 crc kubenswrapper[4766]: I0129 11:45:31.753895 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4scpf\" (UniqueName: \"kubernetes.io/projected/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-kube-api-access-4scpf\") pod \"ovn-controller-5kz4c-config-bgq46\" (UID: \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\") " pod="openstack/ovn-controller-5kz4c-config-bgq46"
Jan 29 11:45:31 crc kubenswrapper[4766]: I0129 11:45:31.753966 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-var-run-ovn\") pod \"ovn-controller-5kz4c-config-bgq46\" (UID: \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\") " pod="openstack/ovn-controller-5kz4c-config-bgq46"
Jan 29 11:45:31 crc kubenswrapper[4766]: I0129 11:45:31.754013 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-scripts\") pod \"ovn-controller-5kz4c-config-bgq46\" (UID: \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\") " pod="openstack/ovn-controller-5kz4c-config-bgq46"
Jan 29 11:45:31 crc kubenswrapper[4766]: I0129 11:45:31.754042 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-var-run\") pod \"ovn-controller-5kz4c-config-bgq46\" (UID: \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\") " pod="openstack/ovn-controller-5kz4c-config-bgq46"
Jan 29 11:45:31 crc kubenswrapper[4766]: I0129 11:45:31.754062 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-additional-scripts\") pod \"ovn-controller-5kz4c-config-bgq46\" (UID: \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\") " pod="openstack/ovn-controller-5kz4c-config-bgq46"
Jan 29 11:45:31 crc kubenswrapper[4766]: I0129 11:45:31.754092 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-var-log-ovn\") pod \"ovn-controller-5kz4c-config-bgq46\" (UID: \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\") " pod="openstack/ovn-controller-5kz4c-config-bgq46"
Jan 29 11:45:31 crc kubenswrapper[4766]: I0129 11:45:31.754477 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-var-run-ovn\") pod \"ovn-controller-5kz4c-config-bgq46\" (UID: \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\") " pod="openstack/ovn-controller-5kz4c-config-bgq46"
Jan 29 11:45:31 crc kubenswrapper[4766]: I0129 11:45:31.754478 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-var-log-ovn\") pod \"ovn-controller-5kz4c-config-bgq46\" (UID: \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\") " pod="openstack/ovn-controller-5kz4c-config-bgq46"
Jan 29 11:45:31 crc kubenswrapper[4766]: I0129 11:45:31.754478 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-var-run\") pod \"ovn-controller-5kz4c-config-bgq46\" (UID: \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\") " pod="openstack/ovn-controller-5kz4c-config-bgq46"
Jan 29 11:45:31 crc kubenswrapper[4766]: I0129 11:45:31.755164 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-additional-scripts\") pod \"ovn-controller-5kz4c-config-bgq46\" (UID: \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\") " pod="openstack/ovn-controller-5kz4c-config-bgq46"
Jan 29 11:45:31 crc kubenswrapper[4766]: I0129 11:45:31.756636 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-scripts\") pod \"ovn-controller-5kz4c-config-bgq46\" (UID: \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\") " pod="openstack/ovn-controller-5kz4c-config-bgq46"
Jan 29 11:45:31 crc kubenswrapper[4766]: I0129 11:45:31.776292 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4scpf\" (UniqueName: \"kubernetes.io/projected/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-kube-api-access-4scpf\") pod \"ovn-controller-5kz4c-config-bgq46\" (UID: \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\") " pod="openstack/ovn-controller-5kz4c-config-bgq46"
Jan 29 11:45:31 crc kubenswrapper[4766]: I0129 11:45:31.883701 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5kz4c-config-bgq46"
Jan 29 11:45:32 crc kubenswrapper[4766]: I0129 11:45:32.786824 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5kz4c-config-bgq46"]
Jan 29 11:45:32 crc kubenswrapper[4766]: W0129 11:45:32.798753 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0e328ed_d3d9_403f_b6ed_caa0e3e570f2.slice/crio-b1271f61d35721a7a76c96fdf8bb27ebd7261d97f07a491b573ad94b209268e0 WatchSource:0}: Error finding container b1271f61d35721a7a76c96fdf8bb27ebd7261d97f07a491b573ad94b209268e0: Status 404 returned error can't find the container with id b1271f61d35721a7a76c96fdf8bb27ebd7261d97f07a491b573ad94b209268e0
Jan 29 11:45:32 crc kubenswrapper[4766]: I0129 11:45:32.893599 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ace2f6ec-cf57-4742-82e9-e13fd230bb69","Type":"ContainerStarted","Data":"07c7e43f4c233bc15a95251cad07a884a33a05f78743bb5a3c6f01f63b880784"}
Jan 29 11:45:32 crc kubenswrapper[4766]: I0129 11:45:32.894002 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 29 11:45:32 crc kubenswrapper[4766]: I0129 11:45:32.903248 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerStarted","Data":"aff768bf5b19009768658ec1f0fc18767e8949cd575199e18d90c8f182040d28"}
Jan 29 11:45:32 crc kubenswrapper[4766]: I0129 11:45:32.903297 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerStarted","Data":"7f8c5aeba92943edcfc2aff61715cdbbc5630ac266d0729c5b84d3f25837100d"}
Jan 29 11:45:32 crc kubenswrapper[4766]: I0129 11:45:32.907581 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5kz4c-config-bgq46" event={"ID":"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2","Type":"ContainerStarted","Data":"b1271f61d35721a7a76c96fdf8bb27ebd7261d97f07a491b573ad94b209268e0"}
Jan 29 11:45:32 crc kubenswrapper[4766]: I0129 11:45:32.914227 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b77b577e-b980-46fb-945a-a0b57e3bdc17","Type":"ContainerStarted","Data":"81f89abef5c9ff0ed76588cc8797d021673aa15a99156bcbfe83b47af9618c73"}
Jan 29 11:45:32 crc kubenswrapper[4766]: I0129 11:45:32.916035 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 29 11:45:32 crc kubenswrapper[4766]: I0129 11:45:32.930854 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=54.852250662 podStartE2EDuration="1m2.930835882s" podCreationTimestamp="2026-01-29 11:44:30 +0000 UTC" firstStartedPulling="2026-01-29 11:44:44.715003289 +0000 UTC m=+1421.827396300" lastFinishedPulling="2026-01-29 11:44:52.793588509 +0000 UTC m=+1429.905981520" observedRunningTime="2026-01-29 11:45:32.927273393 +0000 UTC m=+1470.039666434" watchObservedRunningTime="2026-01-29 11:45:32.930835882 +0000 UTC m=+1470.043228893"
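
The pod_startup_latency_tracker records make the SLO arithmetic visible: podStartSLOduration appears to be the end-to-end startup time minus the image-pull window (lastFinishedPulling - firstStartedPulling), which is why root-account-create-update-twxsv, with zero-valued pull timestamps, shows identical SLO and E2E values. A small Go sketch that reproduces the numbers from the rabbitmq-cell1-server-0 record above, assuming only that reading of the fields:

// slo_check.go - recompute the startup durations from the timestamps in
// the pod_startup_latency_tracker record for rabbitmq-cell1-server-0.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-29 11:44:30 +0000 UTC")
	firstPull := parse("2026-01-29 11:44:44.715003289 +0000 UTC")
	lastPull := parse("2026-01-29 11:44:52.793588509 +0000 UTC")
	running := parse("2026-01-29 11:45:32.930835882 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println("podStartE2EDuration:", e2e) // 1m2.930835882s, as logged
	fmt.Println("podStartSLOduration:", slo) // 54.852250662s, as logged
}

The same subtraction also reproduces the rabbitmq-server-0 and glance-db-sync-mhmgq figures in the surrounding records, which supports that reading.
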
Jan 29 11:45:32 crc kubenswrapper[4766]: I0129 11:45:32.965792 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=53.62946404 podStartE2EDuration="1m2.965769841s" podCreationTimestamp="2026-01-29 11:44:30 +0000 UTC" firstStartedPulling="2026-01-29 11:44:44.129278015 +0000 UTC m=+1421.241671026" lastFinishedPulling="2026-01-29 11:44:53.465583816 +0000 UTC m=+1430.577976827" observedRunningTime="2026-01-29 11:45:32.959327772 +0000 UTC m=+1470.071720783" watchObservedRunningTime="2026-01-29 11:45:32.965769841 +0000 UTC m=+1470.078162862"
Jan 29 11:45:33 crc kubenswrapper[4766]: I0129 11:45:33.125446 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-dwz6q"]
Jan 29 11:45:33 crc kubenswrapper[4766]: I0129 11:45:33.126743 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dwz6q"
Jan 29 11:45:33 crc kubenswrapper[4766]: I0129 11:45:33.129250 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 29 11:45:33 crc kubenswrapper[4766]: I0129 11:45:33.134388 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dwz6q"]
Jan 29 11:45:33 crc kubenswrapper[4766]: I0129 11:45:33.275913 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee506bd6-9b63-4ad3-8499-3802ab144d3e-operator-scripts\") pod \"root-account-create-update-dwz6q\" (UID: \"ee506bd6-9b63-4ad3-8499-3802ab144d3e\") " pod="openstack/root-account-create-update-dwz6q"
Jan 29 11:45:33 crc kubenswrapper[4766]: I0129 11:45:33.276162 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmbfk\" (UniqueName: \"kubernetes.io/projected/ee506bd6-9b63-4ad3-8499-3802ab144d3e-kube-api-access-qmbfk\") pod \"root-account-create-update-dwz6q\" (UID: \"ee506bd6-9b63-4ad3-8499-3802ab144d3e\") " pod="openstack/root-account-create-update-dwz6q"
Jan 29 11:45:33 crc kubenswrapper[4766]: I0129 11:45:33.378042 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmbfk\" (UniqueName: \"kubernetes.io/projected/ee506bd6-9b63-4ad3-8499-3802ab144d3e-kube-api-access-qmbfk\") pod \"root-account-create-update-dwz6q\" (UID: \"ee506bd6-9b63-4ad3-8499-3802ab144d3e\") " pod="openstack/root-account-create-update-dwz6q"
Jan 29 11:45:33 crc kubenswrapper[4766]: I0129 11:45:33.378253 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee506bd6-9b63-4ad3-8499-3802ab144d3e-operator-scripts\") pod \"root-account-create-update-dwz6q\" (UID: \"ee506bd6-9b63-4ad3-8499-3802ab144d3e\") " pod="openstack/root-account-create-update-dwz6q"
Jan 29 11:45:33 crc kubenswrapper[4766]: I0129 11:45:33.378990 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee506bd6-9b63-4ad3-8499-3802ab144d3e-operator-scripts\") pod \"root-account-create-update-dwz6q\" (UID: \"ee506bd6-9b63-4ad3-8499-3802ab144d3e\") " pod="openstack/root-account-create-update-dwz6q"
Jan 29 11:45:33 crc kubenswrapper[4766]: I0129 11:45:33.394588 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmbfk\" (UniqueName: \"kubernetes.io/projected/ee506bd6-9b63-4ad3-8499-3802ab144d3e-kube-api-access-qmbfk\") pod \"root-account-create-update-dwz6q\" (UID: \"ee506bd6-9b63-4ad3-8499-3802ab144d3e\") " pod="openstack/root-account-create-update-dwz6q"
Jan 29 11:45:33 crc kubenswrapper[4766]: I0129 11:45:33.451796 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dwz6q"
Jan 29 11:45:33 crc kubenswrapper[4766]: I0129 11:45:33.858776 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dwz6q"]
Jan 29 11:45:33 crc kubenswrapper[4766]: W0129 11:45:33.867671 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee506bd6_9b63_4ad3_8499_3802ab144d3e.slice/crio-d790312033ea6bc7bab4b552288e5f54d8943facd3d70a9d2016501679482340 WatchSource:0}: Error finding container d790312033ea6bc7bab4b552288e5f54d8943facd3d70a9d2016501679482340: Status 404 returned error can't find the container with id d790312033ea6bc7bab4b552288e5f54d8943facd3d70a9d2016501679482340
Jan 29 11:45:33 crc kubenswrapper[4766]: I0129 11:45:33.928987 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerStarted","Data":"0025dd537da59d77d5c32f5643222b1c209187a4cb4389da45a65ec542521294"}
Jan 29 11:45:33 crc kubenswrapper[4766]: I0129 11:45:33.930686 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mhmgq" event={"ID":"7b251ce0-eaf1-43fc-97a0-e59a8b829b28","Type":"ContainerStarted","Data":"ac77ae0dc0937af8cf11d98edc90f066da8b3ffdaa95852c341e7662f0d23df6"}
Jan 29 11:45:33 crc kubenswrapper[4766]: I0129 11:45:33.933773 4766 generic.go:334] "Generic (PLEG): container finished" podID="e0e328ed-d3d9-403f-b6ed-caa0e3e570f2" containerID="ca2bd9e6324bd1d1a4140bf7f5d26c398cd9fce6da66d4744127ae8f1a2b1c16" exitCode=0
Jan 29 11:45:33 crc kubenswrapper[4766]: I0129 11:45:33.933838 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5kz4c-config-bgq46" event={"ID":"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2","Type":"ContainerDied","Data":"ca2bd9e6324bd1d1a4140bf7f5d26c398cd9fce6da66d4744127ae8f1a2b1c16"}
Jan 29 11:45:33 crc kubenswrapper[4766]: I0129 11:45:33.935278 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dwz6q" event={"ID":"ee506bd6-9b63-4ad3-8499-3802ab144d3e","Type":"ContainerStarted","Data":"d790312033ea6bc7bab4b552288e5f54d8943facd3d70a9d2016501679482340"}
Jan 29 11:45:33 crc kubenswrapper[4766]: I0129 11:45:33.970098 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-mhmgq" podStartSLOduration=2.810045586 podStartE2EDuration="13.970081014s" podCreationTimestamp="2026-01-29 11:45:20 +0000 UTC" firstStartedPulling="2026-01-29 11:45:21.261316094 +0000 UTC m=+1458.373709105" lastFinishedPulling="2026-01-29 11:45:32.421351522 +0000 UTC m=+1469.533744533" observedRunningTime="2026-01-29 11:45:33.948546376 +0000 UTC m=+1471.060939387" watchObservedRunningTime="2026-01-29 11:45:33.970081014 +0000 UTC m=+1471.082474025"
Jan 29 11:45:34 crc kubenswrapper[4766]: I0129 11:45:34.949228 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerStarted","Data":"30f33e794206b04a93fc0f4e715cfe43660a23a19676c6e5b3df502d2e869f1b"}
Jan 29 11:45:34 crc kubenswrapper[4766]: I0129 11:45:34.949789 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerStarted","Data":"e0305b2958f6c65d81b49c58ff14fade2e99341839d85bcc73aa51a8cd5a3041"}
Jan 29 11:45:34 crc kubenswrapper[4766]: I0129 11:45:34.949807 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerStarted","Data":"7c33d37f74f55ffa51cd765a4b94d2af021150d55ef7e15a523b325c621e7d0a"}
Jan 29 11:45:34 crc kubenswrapper[4766]: I0129 11:45:34.952602 4766 generic.go:334] "Generic (PLEG): container finished" podID="ee506bd6-9b63-4ad3-8499-3802ab144d3e" containerID="1ae6efb8d7fd239f4a0fa84c1d56ee76ea87f272277e26ad7b41583d5980455a" exitCode=0
Jan 29 11:45:34 crc kubenswrapper[4766]: I0129 11:45:34.952665 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dwz6q" event={"ID":"ee506bd6-9b63-4ad3-8499-3802ab144d3e","Type":"ContainerDied","Data":"1ae6efb8d7fd239f4a0fa84c1d56ee76ea87f272277e26ad7b41583d5980455a"}
Jan 29 11:45:35 crc kubenswrapper[4766]: I0129 11:45:35.308362 4766 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/ovn-controller-5kz4c-config-bgq46" Jan 29 11:45:35 crc kubenswrapper[4766]: I0129 11:45:35.420372 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-var-run\") pod \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\" (UID: \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\") " Jan 29 11:45:35 crc kubenswrapper[4766]: I0129 11:45:35.420466 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4scpf\" (UniqueName: \"kubernetes.io/projected/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-kube-api-access-4scpf\") pod \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\" (UID: \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\") " Jan 29 11:45:35 crc kubenswrapper[4766]: I0129 11:45:35.420556 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-additional-scripts\") pod \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\" (UID: \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\") " Jan 29 11:45:35 crc kubenswrapper[4766]: I0129 11:45:35.420621 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-scripts\") pod \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\" (UID: \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\") " Jan 29 11:45:35 crc kubenswrapper[4766]: I0129 11:45:35.420641 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-var-log-ovn\") pod \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\" (UID: \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\") " Jan 29 11:45:35 crc kubenswrapper[4766]: I0129 11:45:35.420676 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-var-run-ovn\") pod \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\" (UID: \"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2\") " Jan 29 11:45:35 crc kubenswrapper[4766]: I0129 11:45:35.421041 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "e0e328ed-d3d9-403f-b6ed-caa0e3e570f2" (UID: "e0e328ed-d3d9-403f-b6ed-caa0e3e570f2"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:45:35 crc kubenswrapper[4766]: I0129 11:45:35.421100 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-var-run" (OuterVolumeSpecName: "var-run") pod "e0e328ed-d3d9-403f-b6ed-caa0e3e570f2" (UID: "e0e328ed-d3d9-403f-b6ed-caa0e3e570f2"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:45:35 crc kubenswrapper[4766]: I0129 11:45:35.422004 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "e0e328ed-d3d9-403f-b6ed-caa0e3e570f2" (UID: "e0e328ed-d3d9-403f-b6ed-caa0e3e570f2"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:45:35 crc kubenswrapper[4766]: I0129 11:45:35.422337 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "e0e328ed-d3d9-403f-b6ed-caa0e3e570f2" (UID: "e0e328ed-d3d9-403f-b6ed-caa0e3e570f2"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:45:35 crc kubenswrapper[4766]: I0129 11:45:35.422370 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-scripts" (OuterVolumeSpecName: "scripts") pod "e0e328ed-d3d9-403f-b6ed-caa0e3e570f2" (UID: "e0e328ed-d3d9-403f-b6ed-caa0e3e570f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:45:35 crc kubenswrapper[4766]: I0129 11:45:35.427164 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-kube-api-access-4scpf" (OuterVolumeSpecName: "kube-api-access-4scpf") pod "e0e328ed-d3d9-403f-b6ed-caa0e3e570f2" (UID: "e0e328ed-d3d9-403f-b6ed-caa0e3e570f2"). InnerVolumeSpecName "kube-api-access-4scpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:45:35 crc kubenswrapper[4766]: I0129 11:45:35.522669 4766 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:35 crc kubenswrapper[4766]: I0129 11:45:35.522716 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:35 crc kubenswrapper[4766]: I0129 11:45:35.522726 4766 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:35 crc kubenswrapper[4766]: I0129 11:45:35.522738 4766 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:35 crc kubenswrapper[4766]: I0129 11:45:35.522746 4766 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-var-run\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:35 crc kubenswrapper[4766]: I0129 11:45:35.522755 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4scpf\" (UniqueName: \"kubernetes.io/projected/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2-kube-api-access-4scpf\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:35 crc kubenswrapper[4766]: I0129 11:45:35.964353 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerStarted","Data":"8d94f2b31596b3ca99397133e0199e33b8ac9312697c345fc4b87be8aeecd36f"} Jan 29 11:45:35 crc kubenswrapper[4766]: I0129 11:45:35.966210 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5kz4c-config-bgq46" 
event={"ID":"e0e328ed-d3d9-403f-b6ed-caa0e3e570f2","Type":"ContainerDied","Data":"b1271f61d35721a7a76c96fdf8bb27ebd7261d97f07a491b573ad94b209268e0"} Jan 29 11:45:35 crc kubenswrapper[4766]: I0129 11:45:35.966232 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1271f61d35721a7a76c96fdf8bb27ebd7261d97f07a491b573ad94b209268e0" Jan 29 11:45:35 crc kubenswrapper[4766]: I0129 11:45:35.966344 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5kz4c-config-bgq46" Jan 29 11:45:36 crc kubenswrapper[4766]: I0129 11:45:36.216187 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-5kz4c" Jan 29 11:45:36 crc kubenswrapper[4766]: I0129 11:45:36.351902 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dwz6q" Jan 29 11:45:36 crc kubenswrapper[4766]: I0129 11:45:36.421487 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-5kz4c-config-bgq46"] Jan 29 11:45:36 crc kubenswrapper[4766]: I0129 11:45:36.436147 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-5kz4c-config-bgq46"] Jan 29 11:45:36 crc kubenswrapper[4766]: I0129 11:45:36.444153 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee506bd6-9b63-4ad3-8499-3802ab144d3e-operator-scripts\") pod \"ee506bd6-9b63-4ad3-8499-3802ab144d3e\" (UID: \"ee506bd6-9b63-4ad3-8499-3802ab144d3e\") " Jan 29 11:45:36 crc kubenswrapper[4766]: I0129 11:45:36.444279 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmbfk\" (UniqueName: \"kubernetes.io/projected/ee506bd6-9b63-4ad3-8499-3802ab144d3e-kube-api-access-qmbfk\") pod \"ee506bd6-9b63-4ad3-8499-3802ab144d3e\" (UID: \"ee506bd6-9b63-4ad3-8499-3802ab144d3e\") " Jan 29 11:45:36 crc kubenswrapper[4766]: I0129 11:45:36.445292 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee506bd6-9b63-4ad3-8499-3802ab144d3e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ee506bd6-9b63-4ad3-8499-3802ab144d3e" (UID: "ee506bd6-9b63-4ad3-8499-3802ab144d3e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:45:36 crc kubenswrapper[4766]: I0129 11:45:36.452661 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee506bd6-9b63-4ad3-8499-3802ab144d3e-kube-api-access-qmbfk" (OuterVolumeSpecName: "kube-api-access-qmbfk") pod "ee506bd6-9b63-4ad3-8499-3802ab144d3e" (UID: "ee506bd6-9b63-4ad3-8499-3802ab144d3e"). InnerVolumeSpecName "kube-api-access-qmbfk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:45:36 crc kubenswrapper[4766]: I0129 11:45:36.546624 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee506bd6-9b63-4ad3-8499-3802ab144d3e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:36 crc kubenswrapper[4766]: I0129 11:45:36.546857 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmbfk\" (UniqueName: \"kubernetes.io/projected/ee506bd6-9b63-4ad3-8499-3802ab144d3e-kube-api-access-qmbfk\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:36 crc kubenswrapper[4766]: I0129 11:45:36.975460 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dwz6q" event={"ID":"ee506bd6-9b63-4ad3-8499-3802ab144d3e","Type":"ContainerDied","Data":"d790312033ea6bc7bab4b552288e5f54d8943facd3d70a9d2016501679482340"} Jan 29 11:45:36 crc kubenswrapper[4766]: I0129 11:45:36.975875 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d790312033ea6bc7bab4b552288e5f54d8943facd3d70a9d2016501679482340" Jan 29 11:45:36 crc kubenswrapper[4766]: I0129 11:45:36.975499 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dwz6q" Jan 29 11:45:37 crc kubenswrapper[4766]: I0129 11:45:37.236782 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0e328ed-d3d9-403f-b6ed-caa0e3e570f2" path="/var/lib/kubelet/pods/e0e328ed-d3d9-403f-b6ed-caa0e3e570f2/volumes" Jan 29 11:45:37 crc kubenswrapper[4766]: I0129 11:45:37.988360 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerStarted","Data":"b5a310208e51de3a1f1085a299d696e0c092c1ac6a305a7368d95a466bfff254"} Jan 29 11:45:37 crc kubenswrapper[4766]: I0129 11:45:37.988420 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerStarted","Data":"2395dfbbbded053ffa0416aaf69a1b9af00ea806ccc677235dd81f9d3e9af4d0"} Jan 29 11:45:37 crc kubenswrapper[4766]: I0129 11:45:37.988435 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerStarted","Data":"f4a7df4ad8946a4ec821983033924fd3dd8e163b9568817e4bde1fb325d0beeb"} Jan 29 11:45:37 crc kubenswrapper[4766]: I0129 11:45:37.988445 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerStarted","Data":"c34078361e9f1ca8e71c227ebd7d7091b558e6c3354bb51e22b1e1374342fcd1"} Jan 29 11:45:37 crc kubenswrapper[4766]: I0129 11:45:37.988456 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerStarted","Data":"257558dd443e4fbb0f93499c81b54107c340b1424e2baeb386f3a283efa8bdc7"} Jan 29 11:45:37 crc kubenswrapper[4766]: I0129 11:45:37.988465 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerStarted","Data":"d7be2c0fabfadf12060358b5738adc72343b29f57c77135d1af1a5ae1e4e2863"} Jan 29 11:45:39 crc kubenswrapper[4766]: I0129 11:45:39.002435 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerStarted","Data":"8e68677dd185d8414adc8711bc359046fd3ba61c227101b176907d577a947636"} Jan 29 11:45:39 crc kubenswrapper[4766]: I0129 11:45:39.044363 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=20.0202686 podStartE2EDuration="32.044344732s" podCreationTimestamp="2026-01-29 11:45:07 +0000 UTC" firstStartedPulling="2026-01-29 11:45:24.787640762 +0000 UTC m=+1461.900033773" lastFinishedPulling="2026-01-29 11:45:36.811716894 +0000 UTC m=+1473.924109905" observedRunningTime="2026-01-29 11:45:39.037848372 +0000 UTC m=+1476.150241403" watchObservedRunningTime="2026-01-29 11:45:39.044344732 +0000 UTC m=+1476.156737743" Jan 29 11:45:39 crc kubenswrapper[4766]: I0129 11:45:39.358301 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-qbqw9"] Jan 29 11:45:39 crc kubenswrapper[4766]: E0129 11:45:39.358784 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0e328ed-d3d9-403f-b6ed-caa0e3e570f2" containerName="ovn-config" Jan 29 11:45:39 crc kubenswrapper[4766]: I0129 11:45:39.358810 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0e328ed-d3d9-403f-b6ed-caa0e3e570f2" containerName="ovn-config" Jan 29 11:45:39 crc kubenswrapper[4766]: E0129 11:45:39.358846 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee506bd6-9b63-4ad3-8499-3802ab144d3e" containerName="mariadb-account-create-update" Jan 29 11:45:39 crc kubenswrapper[4766]: I0129 11:45:39.358855 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee506bd6-9b63-4ad3-8499-3802ab144d3e" containerName="mariadb-account-create-update" Jan 29 11:45:39 crc kubenswrapper[4766]: I0129 11:45:39.359050 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0e328ed-d3d9-403f-b6ed-caa0e3e570f2" containerName="ovn-config" Jan 29 11:45:39 crc kubenswrapper[4766]: I0129 11:45:39.359071 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee506bd6-9b63-4ad3-8499-3802ab144d3e" containerName="mariadb-account-create-update" Jan 29 11:45:39 crc kubenswrapper[4766]: I0129 11:45:39.359954 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" Jan 29 11:45:39 crc kubenswrapper[4766]: I0129 11:45:39.363324 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 29 11:45:39 crc kubenswrapper[4766]: I0129 11:45:39.368854 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-qbqw9"] Jan 29 11:45:39 crc kubenswrapper[4766]: I0129 11:45:39.493730 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-qbqw9\" (UID: \"94583424-8f90-4738-a897-bad7f17a276d\") " pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" Jan 29 11:45:39 crc kubenswrapper[4766]: I0129 11:45:39.493826 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-config\") pod \"dnsmasq-dns-5c79d794d7-qbqw9\" (UID: \"94583424-8f90-4738-a897-bad7f17a276d\") " pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" Jan 29 11:45:39 crc kubenswrapper[4766]: I0129 11:45:39.493873 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzqd7\" (UniqueName: \"kubernetes.io/projected/94583424-8f90-4738-a897-bad7f17a276d-kube-api-access-fzqd7\") pod \"dnsmasq-dns-5c79d794d7-qbqw9\" (UID: \"94583424-8f90-4738-a897-bad7f17a276d\") " pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" Jan 29 11:45:39 crc kubenswrapper[4766]: I0129 11:45:39.493961 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-qbqw9\" (UID: \"94583424-8f90-4738-a897-bad7f17a276d\") " pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" Jan 29 11:45:39 crc kubenswrapper[4766]: I0129 11:45:39.493995 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-qbqw9\" (UID: \"94583424-8f90-4738-a897-bad7f17a276d\") " pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" Jan 29 11:45:39 crc kubenswrapper[4766]: I0129 11:45:39.494120 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-qbqw9\" (UID: \"94583424-8f90-4738-a897-bad7f17a276d\") " pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" Jan 29 11:45:39 crc kubenswrapper[4766]: I0129 11:45:39.595356 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzqd7\" (UniqueName: \"kubernetes.io/projected/94583424-8f90-4738-a897-bad7f17a276d-kube-api-access-fzqd7\") pod \"dnsmasq-dns-5c79d794d7-qbqw9\" (UID: \"94583424-8f90-4738-a897-bad7f17a276d\") " pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" Jan 29 11:45:39 crc kubenswrapper[4766]: I0129 11:45:39.595492 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-qbqw9\" (UID: 
\"94583424-8f90-4738-a897-bad7f17a276d\") " pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" Jan 29 11:45:39 crc kubenswrapper[4766]: I0129 11:45:39.595532 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-qbqw9\" (UID: \"94583424-8f90-4738-a897-bad7f17a276d\") " pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" Jan 29 11:45:39 crc kubenswrapper[4766]: I0129 11:45:39.595587 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-qbqw9\" (UID: \"94583424-8f90-4738-a897-bad7f17a276d\") " pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" Jan 29 11:45:39 crc kubenswrapper[4766]: I0129 11:45:39.595657 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-qbqw9\" (UID: \"94583424-8f90-4738-a897-bad7f17a276d\") " pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" Jan 29 11:45:39 crc kubenswrapper[4766]: I0129 11:45:39.595711 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-config\") pod \"dnsmasq-dns-5c79d794d7-qbqw9\" (UID: \"94583424-8f90-4738-a897-bad7f17a276d\") " pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" Jan 29 11:45:39 crc kubenswrapper[4766]: I0129 11:45:39.596860 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-qbqw9\" (UID: \"94583424-8f90-4738-a897-bad7f17a276d\") " pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" Jan 29 11:45:39 crc kubenswrapper[4766]: I0129 11:45:39.596946 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-qbqw9\" (UID: \"94583424-8f90-4738-a897-bad7f17a276d\") " pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" Jan 29 11:45:39 crc kubenswrapper[4766]: I0129 11:45:39.597440 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-qbqw9\" (UID: \"94583424-8f90-4738-a897-bad7f17a276d\") " pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" Jan 29 11:45:39 crc kubenswrapper[4766]: I0129 11:45:39.597724 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-config\") pod \"dnsmasq-dns-5c79d794d7-qbqw9\" (UID: \"94583424-8f90-4738-a897-bad7f17a276d\") " pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" Jan 29 11:45:39 crc kubenswrapper[4766]: I0129 11:45:39.597739 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-qbqw9\" (UID: \"94583424-8f90-4738-a897-bad7f17a276d\") " pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" Jan 29 11:45:39 crc kubenswrapper[4766]: 
I0129 11:45:39.615791 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzqd7\" (UniqueName: \"kubernetes.io/projected/94583424-8f90-4738-a897-bad7f17a276d-kube-api-access-fzqd7\") pod \"dnsmasq-dns-5c79d794d7-qbqw9\" (UID: \"94583424-8f90-4738-a897-bad7f17a276d\") " pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" Jan 29 11:45:39 crc kubenswrapper[4766]: I0129 11:45:39.686107 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" Jan 29 11:45:40 crc kubenswrapper[4766]: W0129 11:45:40.145069 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94583424_8f90_4738_a897_bad7f17a276d.slice/crio-faeb0ad20fd5371896364f6d5ea93bdb1c2b38d5264b1e1c4b020f4670c80472 WatchSource:0}: Error finding container faeb0ad20fd5371896364f6d5ea93bdb1c2b38d5264b1e1c4b020f4670c80472: Status 404 returned error can't find the container with id faeb0ad20fd5371896364f6d5ea93bdb1c2b38d5264b1e1c4b020f4670c80472 Jan 29 11:45:40 crc kubenswrapper[4766]: I0129 11:45:40.148166 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-qbqw9"] Jan 29 11:45:41 crc kubenswrapper[4766]: I0129 11:45:41.020131 4766 generic.go:334] "Generic (PLEG): container finished" podID="7b251ce0-eaf1-43fc-97a0-e59a8b829b28" containerID="ac77ae0dc0937af8cf11d98edc90f066da8b3ffdaa95852c341e7662f0d23df6" exitCode=0 Jan 29 11:45:41 crc kubenswrapper[4766]: I0129 11:45:41.020471 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mhmgq" event={"ID":"7b251ce0-eaf1-43fc-97a0-e59a8b829b28","Type":"ContainerDied","Data":"ac77ae0dc0937af8cf11d98edc90f066da8b3ffdaa95852c341e7662f0d23df6"} Jan 29 11:45:41 crc kubenswrapper[4766]: I0129 11:45:41.022364 4766 generic.go:334] "Generic (PLEG): container finished" podID="94583424-8f90-4738-a897-bad7f17a276d" containerID="44fa11775550b8acfbc9f0059652fe5452ce59462426369bfbeeeb9b1a99bf88" exitCode=0 Jan 29 11:45:41 crc kubenswrapper[4766]: I0129 11:45:41.022397 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" event={"ID":"94583424-8f90-4738-a897-bad7f17a276d","Type":"ContainerDied","Data":"44fa11775550b8acfbc9f0059652fe5452ce59462426369bfbeeeb9b1a99bf88"} Jan 29 11:45:41 crc kubenswrapper[4766]: I0129 11:45:41.022432 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" event={"ID":"94583424-8f90-4738-a897-bad7f17a276d","Type":"ContainerStarted","Data":"faeb0ad20fd5371896364f6d5ea93bdb1c2b38d5264b1e1c4b020f4670c80472"} Jan 29 11:45:42 crc kubenswrapper[4766]: I0129 11:45:42.031251 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" event={"ID":"94583424-8f90-4738-a897-bad7f17a276d","Type":"ContainerStarted","Data":"00cd9cbb9044041c1d19502388ddc057d50fd5aac838bd7889de4baec808e46a"} Jan 29 11:45:42 crc kubenswrapper[4766]: I0129 11:45:42.051868 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" podStartSLOduration=3.051850621 podStartE2EDuration="3.051850621s" podCreationTimestamp="2026-01-29 11:45:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:45:42.051146332 +0000 UTC m=+1479.163539343" watchObservedRunningTime="2026-01-29 11:45:42.051850621 +0000 UTC 
m=+1479.164243632" Jan 29 11:45:42 crc kubenswrapper[4766]: I0129 11:45:42.672628 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mhmgq" Jan 29 11:45:42 crc kubenswrapper[4766]: I0129 11:45:42.783483 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b251ce0-eaf1-43fc-97a0-e59a8b829b28-combined-ca-bundle\") pod \"7b251ce0-eaf1-43fc-97a0-e59a8b829b28\" (UID: \"7b251ce0-eaf1-43fc-97a0-e59a8b829b28\") " Jan 29 11:45:42 crc kubenswrapper[4766]: I0129 11:45:42.783549 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68984\" (UniqueName: \"kubernetes.io/projected/7b251ce0-eaf1-43fc-97a0-e59a8b829b28-kube-api-access-68984\") pod \"7b251ce0-eaf1-43fc-97a0-e59a8b829b28\" (UID: \"7b251ce0-eaf1-43fc-97a0-e59a8b829b28\") " Jan 29 11:45:42 crc kubenswrapper[4766]: I0129 11:45:42.783666 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7b251ce0-eaf1-43fc-97a0-e59a8b829b28-db-sync-config-data\") pod \"7b251ce0-eaf1-43fc-97a0-e59a8b829b28\" (UID: \"7b251ce0-eaf1-43fc-97a0-e59a8b829b28\") " Jan 29 11:45:42 crc kubenswrapper[4766]: I0129 11:45:42.783693 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b251ce0-eaf1-43fc-97a0-e59a8b829b28-config-data\") pod \"7b251ce0-eaf1-43fc-97a0-e59a8b829b28\" (UID: \"7b251ce0-eaf1-43fc-97a0-e59a8b829b28\") " Jan 29 11:45:42 crc kubenswrapper[4766]: I0129 11:45:42.789546 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b251ce0-eaf1-43fc-97a0-e59a8b829b28-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7b251ce0-eaf1-43fc-97a0-e59a8b829b28" (UID: "7b251ce0-eaf1-43fc-97a0-e59a8b829b28"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:45:42 crc kubenswrapper[4766]: I0129 11:45:42.795774 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b251ce0-eaf1-43fc-97a0-e59a8b829b28-kube-api-access-68984" (OuterVolumeSpecName: "kube-api-access-68984") pod "7b251ce0-eaf1-43fc-97a0-e59a8b829b28" (UID: "7b251ce0-eaf1-43fc-97a0-e59a8b829b28"). InnerVolumeSpecName "kube-api-access-68984". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:45:42 crc kubenswrapper[4766]: I0129 11:45:42.814528 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b251ce0-eaf1-43fc-97a0-e59a8b829b28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b251ce0-eaf1-43fc-97a0-e59a8b829b28" (UID: "7b251ce0-eaf1-43fc-97a0-e59a8b829b28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:45:42 crc kubenswrapper[4766]: I0129 11:45:42.838307 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b251ce0-eaf1-43fc-97a0-e59a8b829b28-config-data" (OuterVolumeSpecName: "config-data") pod "7b251ce0-eaf1-43fc-97a0-e59a8b829b28" (UID: "7b251ce0-eaf1-43fc-97a0-e59a8b829b28"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:45:42 crc kubenswrapper[4766]: I0129 11:45:42.885831 4766 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7b251ce0-eaf1-43fc-97a0-e59a8b829b28-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:42 crc kubenswrapper[4766]: I0129 11:45:42.885860 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b251ce0-eaf1-43fc-97a0-e59a8b829b28-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:42 crc kubenswrapper[4766]: I0129 11:45:42.885869 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b251ce0-eaf1-43fc-97a0-e59a8b829b28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:42 crc kubenswrapper[4766]: I0129 11:45:42.885878 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68984\" (UniqueName: \"kubernetes.io/projected/7b251ce0-eaf1-43fc-97a0-e59a8b829b28-kube-api-access-68984\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:43 crc kubenswrapper[4766]: I0129 11:45:43.041812 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mhmgq" Jan 29 11:45:43 crc kubenswrapper[4766]: I0129 11:45:43.041786 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mhmgq" event={"ID":"7b251ce0-eaf1-43fc-97a0-e59a8b829b28","Type":"ContainerDied","Data":"69b5a683936345c2a9a0deb0e6f258ef0bd95fc61640ce7f6ccfb8a9e06ea063"} Jan 29 11:45:43 crc kubenswrapper[4766]: I0129 11:45:43.041867 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69b5a683936345c2a9a0deb0e6f258ef0bd95fc61640ce7f6ccfb8a9e06ea063" Jan 29 11:45:43 crc kubenswrapper[4766]: I0129 11:45:43.041929 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" Jan 29 11:45:43 crc kubenswrapper[4766]: I0129 11:45:43.432383 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-qbqw9"] Jan 29 11:45:43 crc kubenswrapper[4766]: I0129 11:45:43.461171 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-h59dv"] Jan 29 11:45:43 crc kubenswrapper[4766]: E0129 11:45:43.461540 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b251ce0-eaf1-43fc-97a0-e59a8b829b28" containerName="glance-db-sync" Jan 29 11:45:43 crc kubenswrapper[4766]: I0129 11:45:43.461556 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b251ce0-eaf1-43fc-97a0-e59a8b829b28" containerName="glance-db-sync" Jan 29 11:45:43 crc kubenswrapper[4766]: I0129 11:45:43.461751 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b251ce0-eaf1-43fc-97a0-e59a8b829b28" containerName="glance-db-sync" Jan 29 11:45:43 crc kubenswrapper[4766]: I0129 11:45:43.462572 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" Jan 29 11:45:43 crc kubenswrapper[4766]: I0129 11:45:43.489401 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-h59dv"] Jan 29 11:45:43 crc kubenswrapper[4766]: I0129 11:45:43.496464 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-h59dv\" (UID: \"da092d08-0c97-45e3-8d8a-162c6a00d827\") " pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" Jan 29 11:45:43 crc kubenswrapper[4766]: I0129 11:45:43.496558 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-h59dv\" (UID: \"da092d08-0c97-45e3-8d8a-162c6a00d827\") " pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" Jan 29 11:45:43 crc kubenswrapper[4766]: I0129 11:45:43.496586 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-h59dv\" (UID: \"da092d08-0c97-45e3-8d8a-162c6a00d827\") " pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" Jan 29 11:45:43 crc kubenswrapper[4766]: I0129 11:45:43.496635 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-config\") pod \"dnsmasq-dns-5f59b8f679-h59dv\" (UID: \"da092d08-0c97-45e3-8d8a-162c6a00d827\") " pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" Jan 29 11:45:43 crc kubenswrapper[4766]: I0129 11:45:43.496698 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv44r\" (UniqueName: \"kubernetes.io/projected/da092d08-0c97-45e3-8d8a-162c6a00d827-kube-api-access-dv44r\") pod \"dnsmasq-dns-5f59b8f679-h59dv\" (UID: \"da092d08-0c97-45e3-8d8a-162c6a00d827\") " pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" Jan 29 11:45:43 crc kubenswrapper[4766]: I0129 11:45:43.496729 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-h59dv\" (UID: \"da092d08-0c97-45e3-8d8a-162c6a00d827\") " pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" Jan 29 11:45:43 crc kubenswrapper[4766]: I0129 11:45:43.598876 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-h59dv\" (UID: \"da092d08-0c97-45e3-8d8a-162c6a00d827\") " pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" Jan 29 11:45:43 crc kubenswrapper[4766]: I0129 11:45:43.598952 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-h59dv\" (UID: \"da092d08-0c97-45e3-8d8a-162c6a00d827\") " pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" Jan 29 11:45:43 crc kubenswrapper[4766]: I0129 11:45:43.598975 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-h59dv\" (UID: \"da092d08-0c97-45e3-8d8a-162c6a00d827\") " pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" Jan 29 11:45:43 crc kubenswrapper[4766]: I0129 11:45:43.599024 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-config\") pod \"dnsmasq-dns-5f59b8f679-h59dv\" (UID: \"da092d08-0c97-45e3-8d8a-162c6a00d827\") " pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" Jan 29 11:45:43 crc kubenswrapper[4766]: I0129 11:45:43.599108 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv44r\" (UniqueName: \"kubernetes.io/projected/da092d08-0c97-45e3-8d8a-162c6a00d827-kube-api-access-dv44r\") pod \"dnsmasq-dns-5f59b8f679-h59dv\" (UID: \"da092d08-0c97-45e3-8d8a-162c6a00d827\") " pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" Jan 29 11:45:43 crc kubenswrapper[4766]: I0129 11:45:43.599130 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-h59dv\" (UID: \"da092d08-0c97-45e3-8d8a-162c6a00d827\") " pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" Jan 29 11:45:43 crc kubenswrapper[4766]: I0129 11:45:43.600201 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-h59dv\" (UID: \"da092d08-0c97-45e3-8d8a-162c6a00d827\") " pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" Jan 29 11:45:43 crc kubenswrapper[4766]: I0129 11:45:43.600225 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-h59dv\" (UID: \"da092d08-0c97-45e3-8d8a-162c6a00d827\") " pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" Jan 29 11:45:43 crc kubenswrapper[4766]: I0129 11:45:43.600385 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-h59dv\" (UID: \"da092d08-0c97-45e3-8d8a-162c6a00d827\") " pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" Jan 29 11:45:43 crc kubenswrapper[4766]: I0129 11:45:43.600467 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-h59dv\" (UID: \"da092d08-0c97-45e3-8d8a-162c6a00d827\") " pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" Jan 29 11:45:43 crc kubenswrapper[4766]: I0129 11:45:43.600756 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-config\") pod \"dnsmasq-dns-5f59b8f679-h59dv\" (UID: \"da092d08-0c97-45e3-8d8a-162c6a00d827\") " pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" Jan 29 11:45:43 crc kubenswrapper[4766]: I0129 11:45:43.626529 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv44r\" (UniqueName: 
\"kubernetes.io/projected/da092d08-0c97-45e3-8d8a-162c6a00d827-kube-api-access-dv44r\") pod \"dnsmasq-dns-5f59b8f679-h59dv\" (UID: \"da092d08-0c97-45e3-8d8a-162c6a00d827\") " pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" Jan 29 11:45:43 crc kubenswrapper[4766]: I0129 11:45:43.788622 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" Jan 29 11:45:44 crc kubenswrapper[4766]: I0129 11:45:44.237867 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-h59dv"] Jan 29 11:45:44 crc kubenswrapper[4766]: W0129 11:45:44.239002 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda092d08_0c97_45e3_8d8a_162c6a00d827.slice/crio-84608bdf814f419748601d04f27acd318bfc5d8fe9b9f669e815f527d99e2612 WatchSource:0}: Error finding container 84608bdf814f419748601d04f27acd318bfc5d8fe9b9f669e815f527d99e2612: Status 404 returned error can't find the container with id 84608bdf814f419748601d04f27acd318bfc5d8fe9b9f669e815f527d99e2612 Jan 29 11:45:45 crc kubenswrapper[4766]: I0129 11:45:45.059391 4766 generic.go:334] "Generic (PLEG): container finished" podID="da092d08-0c97-45e3-8d8a-162c6a00d827" containerID="e66e30a3da739d10e6e576c8ae714d072ec22bacc4d2125da012135b0bb6f3b2" exitCode=0 Jan 29 11:45:45 crc kubenswrapper[4766]: I0129 11:45:45.059522 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" event={"ID":"da092d08-0c97-45e3-8d8a-162c6a00d827","Type":"ContainerDied","Data":"e66e30a3da739d10e6e576c8ae714d072ec22bacc4d2125da012135b0bb6f3b2"} Jan 29 11:45:45 crc kubenswrapper[4766]: I0129 11:45:45.059571 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" event={"ID":"da092d08-0c97-45e3-8d8a-162c6a00d827","Type":"ContainerStarted","Data":"84608bdf814f419748601d04f27acd318bfc5d8fe9b9f669e815f527d99e2612"} Jan 29 11:45:45 crc kubenswrapper[4766]: I0129 11:45:45.059650 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" podUID="94583424-8f90-4738-a897-bad7f17a276d" containerName="dnsmasq-dns" containerID="cri-o://00cd9cbb9044041c1d19502388ddc057d50fd5aac838bd7889de4baec808e46a" gracePeriod=10 Jan 29 11:45:45 crc kubenswrapper[4766]: I0129 11:45:45.482625 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" Jan 29 11:45:45 crc kubenswrapper[4766]: I0129 11:45:45.632618 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-config\") pod \"94583424-8f90-4738-a897-bad7f17a276d\" (UID: \"94583424-8f90-4738-a897-bad7f17a276d\") " Jan 29 11:45:45 crc kubenswrapper[4766]: I0129 11:45:45.632751 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-ovsdbserver-nb\") pod \"94583424-8f90-4738-a897-bad7f17a276d\" (UID: \"94583424-8f90-4738-a897-bad7f17a276d\") " Jan 29 11:45:45 crc kubenswrapper[4766]: I0129 11:45:45.632787 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-dns-swift-storage-0\") pod \"94583424-8f90-4738-a897-bad7f17a276d\" (UID: \"94583424-8f90-4738-a897-bad7f17a276d\") " Jan 29 11:45:45 crc kubenswrapper[4766]: I0129 11:45:45.632823 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzqd7\" (UniqueName: \"kubernetes.io/projected/94583424-8f90-4738-a897-bad7f17a276d-kube-api-access-fzqd7\") pod \"94583424-8f90-4738-a897-bad7f17a276d\" (UID: \"94583424-8f90-4738-a897-bad7f17a276d\") " Jan 29 11:45:45 crc kubenswrapper[4766]: I0129 11:45:45.633440 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-ovsdbserver-sb\") pod \"94583424-8f90-4738-a897-bad7f17a276d\" (UID: \"94583424-8f90-4738-a897-bad7f17a276d\") " Jan 29 11:45:45 crc kubenswrapper[4766]: I0129 11:45:45.633470 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-dns-svc\") pod \"94583424-8f90-4738-a897-bad7f17a276d\" (UID: \"94583424-8f90-4738-a897-bad7f17a276d\") " Jan 29 11:45:45 crc kubenswrapper[4766]: I0129 11:45:45.640713 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94583424-8f90-4738-a897-bad7f17a276d-kube-api-access-fzqd7" (OuterVolumeSpecName: "kube-api-access-fzqd7") pod "94583424-8f90-4738-a897-bad7f17a276d" (UID: "94583424-8f90-4738-a897-bad7f17a276d"). InnerVolumeSpecName "kube-api-access-fzqd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:45:45 crc kubenswrapper[4766]: I0129 11:45:45.677381 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-config" (OuterVolumeSpecName: "config") pod "94583424-8f90-4738-a897-bad7f17a276d" (UID: "94583424-8f90-4738-a897-bad7f17a276d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:45:45 crc kubenswrapper[4766]: I0129 11:45:45.684001 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "94583424-8f90-4738-a897-bad7f17a276d" (UID: "94583424-8f90-4738-a897-bad7f17a276d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:45:45 crc kubenswrapper[4766]: I0129 11:45:45.685426 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "94583424-8f90-4738-a897-bad7f17a276d" (UID: "94583424-8f90-4738-a897-bad7f17a276d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:45:45 crc kubenswrapper[4766]: I0129 11:45:45.688938 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "94583424-8f90-4738-a897-bad7f17a276d" (UID: "94583424-8f90-4738-a897-bad7f17a276d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:45:45 crc kubenswrapper[4766]: I0129 11:45:45.695473 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "94583424-8f90-4738-a897-bad7f17a276d" (UID: "94583424-8f90-4738-a897-bad7f17a276d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:45:45 crc kubenswrapper[4766]: I0129 11:45:45.734788 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:45 crc kubenswrapper[4766]: I0129 11:45:45.734822 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:45 crc kubenswrapper[4766]: I0129 11:45:45.734835 4766 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:45 crc kubenswrapper[4766]: I0129 11:45:45.734845 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzqd7\" (UniqueName: \"kubernetes.io/projected/94583424-8f90-4738-a897-bad7f17a276d-kube-api-access-fzqd7\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:45 crc kubenswrapper[4766]: I0129 11:45:45.734854 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:45 crc kubenswrapper[4766]: I0129 11:45:45.734867 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94583424-8f90-4738-a897-bad7f17a276d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:46 crc kubenswrapper[4766]: I0129 11:45:46.072896 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" event={"ID":"da092d08-0c97-45e3-8d8a-162c6a00d827","Type":"ContainerStarted","Data":"b701ef44e347feb43c14cfc6e87ee771ec0e9ba2936937403c6f2c8f306e1c2c"} Jan 29 11:45:46 crc kubenswrapper[4766]: I0129 11:45:46.073584 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" Jan 29 11:45:46 crc kubenswrapper[4766]: I0129 11:45:46.076089 4766 generic.go:334] "Generic 
(PLEG): container finished" podID="94583424-8f90-4738-a897-bad7f17a276d" containerID="00cd9cbb9044041c1d19502388ddc057d50fd5aac838bd7889de4baec808e46a" exitCode=0 Jan 29 11:45:46 crc kubenswrapper[4766]: I0129 11:45:46.076150 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" event={"ID":"94583424-8f90-4738-a897-bad7f17a276d","Type":"ContainerDied","Data":"00cd9cbb9044041c1d19502388ddc057d50fd5aac838bd7889de4baec808e46a"} Jan 29 11:45:46 crc kubenswrapper[4766]: I0129 11:45:46.076188 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" event={"ID":"94583424-8f90-4738-a897-bad7f17a276d","Type":"ContainerDied","Data":"faeb0ad20fd5371896364f6d5ea93bdb1c2b38d5264b1e1c4b020f4670c80472"} Jan 29 11:45:46 crc kubenswrapper[4766]: I0129 11:45:46.076189 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-qbqw9" Jan 29 11:45:46 crc kubenswrapper[4766]: I0129 11:45:46.076206 4766 scope.go:117] "RemoveContainer" containerID="00cd9cbb9044041c1d19502388ddc057d50fd5aac838bd7889de4baec808e46a" Jan 29 11:45:46 crc kubenswrapper[4766]: I0129 11:45:46.095773 4766 scope.go:117] "RemoveContainer" containerID="44fa11775550b8acfbc9f0059652fe5452ce59462426369bfbeeeb9b1a99bf88" Jan 29 11:45:46 crc kubenswrapper[4766]: I0129 11:45:46.100109 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" podStartSLOduration=3.100094925 podStartE2EDuration="3.100094925s" podCreationTimestamp="2026-01-29 11:45:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:45:46.095925779 +0000 UTC m=+1483.208318790" watchObservedRunningTime="2026-01-29 11:45:46.100094925 +0000 UTC m=+1483.212487946" Jan 29 11:45:46 crc kubenswrapper[4766]: I0129 11:45:46.125961 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-qbqw9"] Jan 29 11:45:46 crc kubenswrapper[4766]: I0129 11:45:46.126993 4766 scope.go:117] "RemoveContainer" containerID="00cd9cbb9044041c1d19502388ddc057d50fd5aac838bd7889de4baec808e46a" Jan 29 11:45:46 crc kubenswrapper[4766]: E0129 11:45:46.127610 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00cd9cbb9044041c1d19502388ddc057d50fd5aac838bd7889de4baec808e46a\": container with ID starting with 00cd9cbb9044041c1d19502388ddc057d50fd5aac838bd7889de4baec808e46a not found: ID does not exist" containerID="00cd9cbb9044041c1d19502388ddc057d50fd5aac838bd7889de4baec808e46a" Jan 29 11:45:46 crc kubenswrapper[4766]: I0129 11:45:46.127651 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00cd9cbb9044041c1d19502388ddc057d50fd5aac838bd7889de4baec808e46a"} err="failed to get container status \"00cd9cbb9044041c1d19502388ddc057d50fd5aac838bd7889de4baec808e46a\": rpc error: code = NotFound desc = could not find container \"00cd9cbb9044041c1d19502388ddc057d50fd5aac838bd7889de4baec808e46a\": container with ID starting with 00cd9cbb9044041c1d19502388ddc057d50fd5aac838bd7889de4baec808e46a not found: ID does not exist" Jan 29 11:45:46 crc kubenswrapper[4766]: I0129 11:45:46.127677 4766 scope.go:117] "RemoveContainer" containerID="44fa11775550b8acfbc9f0059652fe5452ce59462426369bfbeeeb9b1a99bf88" Jan 29 11:45:46 crc kubenswrapper[4766]: E0129 11:45:46.128151 4766 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44fa11775550b8acfbc9f0059652fe5452ce59462426369bfbeeeb9b1a99bf88\": container with ID starting with 44fa11775550b8acfbc9f0059652fe5452ce59462426369bfbeeeb9b1a99bf88 not found: ID does not exist" containerID="44fa11775550b8acfbc9f0059652fe5452ce59462426369bfbeeeb9b1a99bf88" Jan 29 11:45:46 crc kubenswrapper[4766]: I0129 11:45:46.128261 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44fa11775550b8acfbc9f0059652fe5452ce59462426369bfbeeeb9b1a99bf88"} err="failed to get container status \"44fa11775550b8acfbc9f0059652fe5452ce59462426369bfbeeeb9b1a99bf88\": rpc error: code = NotFound desc = could not find container \"44fa11775550b8acfbc9f0059652fe5452ce59462426369bfbeeeb9b1a99bf88\": container with ID starting with 44fa11775550b8acfbc9f0059652fe5452ce59462426369bfbeeeb9b1a99bf88 not found: ID does not exist" Jan 29 11:45:46 crc kubenswrapper[4766]: I0129 11:45:46.136136 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-qbqw9"] Jan 29 11:45:47 crc kubenswrapper[4766]: I0129 11:45:47.234521 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94583424-8f90-4738-a897-bad7f17a276d" path="/var/lib/kubelet/pods/94583424-8f90-4738-a897-bad7f17a276d/volumes" Jan 29 11:45:51 crc kubenswrapper[4766]: I0129 11:45:51.527589 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:45:51 crc kubenswrapper[4766]: I0129 11:45:51.812617 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.187294 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-vh7hf"] Jan 29 11:45:53 crc kubenswrapper[4766]: E0129 11:45:53.188106 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94583424-8f90-4738-a897-bad7f17a276d" containerName="init" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.188124 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="94583424-8f90-4738-a897-bad7f17a276d" containerName="init" Jan 29 11:45:53 crc kubenswrapper[4766]: E0129 11:45:53.188144 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94583424-8f90-4738-a897-bad7f17a276d" containerName="dnsmasq-dns" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.188153 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="94583424-8f90-4738-a897-bad7f17a276d" containerName="dnsmasq-dns" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.188581 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="94583424-8f90-4738-a897-bad7f17a276d" containerName="dnsmasq-dns" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.189571 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-vh7hf" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.265740 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-vh7hf"] Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.276480 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-c583-account-create-update-czpwn"] Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.278046 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-c583-account-create-update-czpwn" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.281107 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.295348 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-c583-account-create-update-czpwn"] Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.330805 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-2b54-account-create-update-p9795"] Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.332068 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2b54-account-create-update-p9795" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.335839 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.338453 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-q9q6t"] Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.339535 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-q9q6t" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.348995 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-q9q6t"] Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.358953 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2b54-account-create-update-p9795"] Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.364716 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmr7m\" (UniqueName: \"kubernetes.io/projected/d55d102c-fb19-40f9-be67-8234ec2232c4-kube-api-access-gmr7m\") pod \"barbican-c583-account-create-update-czpwn\" (UID: \"d55d102c-fb19-40f9-be67-8234ec2232c4\") " pod="openstack/barbican-c583-account-create-update-czpwn" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.364815 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feb7e1b9-9706-4112-88e1-6bd624f14680-operator-scripts\") pod \"cinder-db-create-vh7hf\" (UID: \"feb7e1b9-9706-4112-88e1-6bd624f14680\") " pod="openstack/cinder-db-create-vh7hf" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.364931 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb2jq\" (UniqueName: \"kubernetes.io/projected/feb7e1b9-9706-4112-88e1-6bd624f14680-kube-api-access-rb2jq\") pod \"cinder-db-create-vh7hf\" (UID: \"feb7e1b9-9706-4112-88e1-6bd624f14680\") " pod="openstack/cinder-db-create-vh7hf" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.364985 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d55d102c-fb19-40f9-be67-8234ec2232c4-operator-scripts\") pod \"barbican-c583-account-create-update-czpwn\" (UID: \"d55d102c-fb19-40f9-be67-8234ec2232c4\") " pod="openstack/barbican-c583-account-create-update-czpwn" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.449304 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-c4t2j"] Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.451613 4766 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-c4t2j" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.463878 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-c4t2j"] Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.468782 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmr7m\" (UniqueName: \"kubernetes.io/projected/d55d102c-fb19-40f9-be67-8234ec2232c4-kube-api-access-gmr7m\") pod \"barbican-c583-account-create-update-czpwn\" (UID: \"d55d102c-fb19-40f9-be67-8234ec2232c4\") " pod="openstack/barbican-c583-account-create-update-czpwn" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.469010 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6pp4\" (UniqueName: \"kubernetes.io/projected/5a17ce00-9749-4f6d-8259-b25a78cdf8a7-kube-api-access-r6pp4\") pod \"barbican-db-create-q9q6t\" (UID: \"5a17ce00-9749-4f6d-8259-b25a78cdf8a7\") " pod="openstack/barbican-db-create-q9q6t" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.469136 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feb7e1b9-9706-4112-88e1-6bd624f14680-operator-scripts\") pod \"cinder-db-create-vh7hf\" (UID: \"feb7e1b9-9706-4112-88e1-6bd624f14680\") " pod="openstack/cinder-db-create-vh7hf" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.469242 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/212d3fdc-eac2-4868-a017-878a6f0d3cea-operator-scripts\") pod \"cinder-2b54-account-create-update-p9795\" (UID: \"212d3fdc-eac2-4868-a017-878a6f0d3cea\") " pod="openstack/cinder-2b54-account-create-update-p9795" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.469341 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a17ce00-9749-4f6d-8259-b25a78cdf8a7-operator-scripts\") pod \"barbican-db-create-q9q6t\" (UID: \"5a17ce00-9749-4f6d-8259-b25a78cdf8a7\") " pod="openstack/barbican-db-create-q9q6t" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.469440 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29bdf\" (UniqueName: \"kubernetes.io/projected/212d3fdc-eac2-4868-a017-878a6f0d3cea-kube-api-access-29bdf\") pod \"cinder-2b54-account-create-update-p9795\" (UID: \"212d3fdc-eac2-4868-a017-878a6f0d3cea\") " pod="openstack/cinder-2b54-account-create-update-p9795" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.469608 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb2jq\" (UniqueName: \"kubernetes.io/projected/feb7e1b9-9706-4112-88e1-6bd624f14680-kube-api-access-rb2jq\") pod \"cinder-db-create-vh7hf\" (UID: \"feb7e1b9-9706-4112-88e1-6bd624f14680\") " pod="openstack/cinder-db-create-vh7hf" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.469724 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d55d102c-fb19-40f9-be67-8234ec2232c4-operator-scripts\") pod \"barbican-c583-account-create-update-czpwn\" (UID: \"d55d102c-fb19-40f9-be67-8234ec2232c4\") " 
pod="openstack/barbican-c583-account-create-update-czpwn" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.470102 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feb7e1b9-9706-4112-88e1-6bd624f14680-operator-scripts\") pod \"cinder-db-create-vh7hf\" (UID: \"feb7e1b9-9706-4112-88e1-6bd624f14680\") " pod="openstack/cinder-db-create-vh7hf" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.470393 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d55d102c-fb19-40f9-be67-8234ec2232c4-operator-scripts\") pod \"barbican-c583-account-create-update-czpwn\" (UID: \"d55d102c-fb19-40f9-be67-8234ec2232c4\") " pod="openstack/barbican-c583-account-create-update-czpwn" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.480330 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-pvvng"] Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.481555 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-pvvng" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.485808 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.486424 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-68trd" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.485815 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.485857 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.499578 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmr7m\" (UniqueName: \"kubernetes.io/projected/d55d102c-fb19-40f9-be67-8234ec2232c4-kube-api-access-gmr7m\") pod \"barbican-c583-account-create-update-czpwn\" (UID: \"d55d102c-fb19-40f9-be67-8234ec2232c4\") " pod="openstack/barbican-c583-account-create-update-czpwn" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.502371 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb2jq\" (UniqueName: \"kubernetes.io/projected/feb7e1b9-9706-4112-88e1-6bd624f14680-kube-api-access-rb2jq\") pod \"cinder-db-create-vh7hf\" (UID: \"feb7e1b9-9706-4112-88e1-6bd624f14680\") " pod="openstack/cinder-db-create-vh7hf" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.509503 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-pvvng"] Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.528010 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-vh7hf" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.569962 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c4c0-account-create-update-lrrxn"] Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.571735 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b-operator-scripts\") pod \"neutron-db-create-c4t2j\" (UID: \"d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b\") " pod="openstack/neutron-db-create-c4t2j" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.571799 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/212d3fdc-eac2-4868-a017-878a6f0d3cea-operator-scripts\") pod \"cinder-2b54-account-create-update-p9795\" (UID: \"212d3fdc-eac2-4868-a017-878a6f0d3cea\") " pod="openstack/cinder-2b54-account-create-update-p9795" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.571828 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59dww\" (UniqueName: \"kubernetes.io/projected/3fbb8794-f929-4bc3-9fc4-fc1e8589691b-kube-api-access-59dww\") pod \"keystone-db-sync-pvvng\" (UID: \"3fbb8794-f929-4bc3-9fc4-fc1e8589691b\") " pod="openstack/keystone-db-sync-pvvng" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.571853 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fbb8794-f929-4bc3-9fc4-fc1e8589691b-config-data\") pod \"keystone-db-sync-pvvng\" (UID: \"3fbb8794-f929-4bc3-9fc4-fc1e8589691b\") " pod="openstack/keystone-db-sync-pvvng" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.571882 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a17ce00-9749-4f6d-8259-b25a78cdf8a7-operator-scripts\") pod \"barbican-db-create-q9q6t\" (UID: \"5a17ce00-9749-4f6d-8259-b25a78cdf8a7\") " pod="openstack/barbican-db-create-q9q6t" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.571905 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29bdf\" (UniqueName: \"kubernetes.io/projected/212d3fdc-eac2-4868-a017-878a6f0d3cea-kube-api-access-29bdf\") pod \"cinder-2b54-account-create-update-p9795\" (UID: \"212d3fdc-eac2-4868-a017-878a6f0d3cea\") " pod="openstack/cinder-2b54-account-create-update-p9795" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.571952 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrwn7\" (UniqueName: \"kubernetes.io/projected/d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b-kube-api-access-wrwn7\") pod \"neutron-db-create-c4t2j\" (UID: \"d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b\") " pod="openstack/neutron-db-create-c4t2j" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.571973 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fbb8794-f929-4bc3-9fc4-fc1e8589691b-combined-ca-bundle\") pod \"keystone-db-sync-pvvng\" (UID: \"3fbb8794-f929-4bc3-9fc4-fc1e8589691b\") " pod="openstack/keystone-db-sync-pvvng" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.572004 4766 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6pp4\" (UniqueName: \"kubernetes.io/projected/5a17ce00-9749-4f6d-8259-b25a78cdf8a7-kube-api-access-r6pp4\") pod \"barbican-db-create-q9q6t\" (UID: \"5a17ce00-9749-4f6d-8259-b25a78cdf8a7\") " pod="openstack/barbican-db-create-q9q6t" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.573388 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c4c0-account-create-update-lrrxn" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.575215 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/212d3fdc-eac2-4868-a017-878a6f0d3cea-operator-scripts\") pod \"cinder-2b54-account-create-update-p9795\" (UID: \"212d3fdc-eac2-4868-a017-878a6f0d3cea\") " pod="openstack/cinder-2b54-account-create-update-p9795" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.575743 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a17ce00-9749-4f6d-8259-b25a78cdf8a7-operator-scripts\") pod \"barbican-db-create-q9q6t\" (UID: \"5a17ce00-9749-4f6d-8259-b25a78cdf8a7\") " pod="openstack/barbican-db-create-q9q6t" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.580953 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.595234 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c583-account-create-update-czpwn" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.605928 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29bdf\" (UniqueName: \"kubernetes.io/projected/212d3fdc-eac2-4868-a017-878a6f0d3cea-kube-api-access-29bdf\") pod \"cinder-2b54-account-create-update-p9795\" (UID: \"212d3fdc-eac2-4868-a017-878a6f0d3cea\") " pod="openstack/cinder-2b54-account-create-update-p9795" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.607252 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c4c0-account-create-update-lrrxn"] Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.609324 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6pp4\" (UniqueName: \"kubernetes.io/projected/5a17ce00-9749-4f6d-8259-b25a78cdf8a7-kube-api-access-r6pp4\") pod \"barbican-db-create-q9q6t\" (UID: \"5a17ce00-9749-4f6d-8259-b25a78cdf8a7\") " pod="openstack/barbican-db-create-q9q6t" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.649039 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2b54-account-create-update-p9795" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.659755 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-q9q6t" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.681689 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b-operator-scripts\") pod \"neutron-db-create-c4t2j\" (UID: \"d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b\") " pod="openstack/neutron-db-create-c4t2j" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.681756 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59dww\" (UniqueName: \"kubernetes.io/projected/3fbb8794-f929-4bc3-9fc4-fc1e8589691b-kube-api-access-59dww\") pod \"keystone-db-sync-pvvng\" (UID: \"3fbb8794-f929-4bc3-9fc4-fc1e8589691b\") " pod="openstack/keystone-db-sync-pvvng" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.681785 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fbb8794-f929-4bc3-9fc4-fc1e8589691b-config-data\") pod \"keystone-db-sync-pvvng\" (UID: \"3fbb8794-f929-4bc3-9fc4-fc1e8589691b\") " pod="openstack/keystone-db-sync-pvvng" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.681871 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh72k\" (UniqueName: \"kubernetes.io/projected/80170965-93b1-41a5-8a4b-e0e3c87beda4-kube-api-access-nh72k\") pod \"neutron-c4c0-account-create-update-lrrxn\" (UID: \"80170965-93b1-41a5-8a4b-e0e3c87beda4\") " pod="openstack/neutron-c4c0-account-create-update-lrrxn" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.681897 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrwn7\" (UniqueName: \"kubernetes.io/projected/d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b-kube-api-access-wrwn7\") pod \"neutron-db-create-c4t2j\" (UID: \"d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b\") " pod="openstack/neutron-db-create-c4t2j" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.681919 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fbb8794-f929-4bc3-9fc4-fc1e8589691b-combined-ca-bundle\") pod \"keystone-db-sync-pvvng\" (UID: \"3fbb8794-f929-4bc3-9fc4-fc1e8589691b\") " pod="openstack/keystone-db-sync-pvvng" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.681966 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80170965-93b1-41a5-8a4b-e0e3c87beda4-operator-scripts\") pod \"neutron-c4c0-account-create-update-lrrxn\" (UID: \"80170965-93b1-41a5-8a4b-e0e3c87beda4\") " pod="openstack/neutron-c4c0-account-create-update-lrrxn" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.682611 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b-operator-scripts\") pod \"neutron-db-create-c4t2j\" (UID: \"d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b\") " pod="openstack/neutron-db-create-c4t2j" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.688104 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fbb8794-f929-4bc3-9fc4-fc1e8589691b-combined-ca-bundle\") pod \"keystone-db-sync-pvvng\" (UID: 
\"3fbb8794-f929-4bc3-9fc4-fc1e8589691b\") " pod="openstack/keystone-db-sync-pvvng" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.692883 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fbb8794-f929-4bc3-9fc4-fc1e8589691b-config-data\") pod \"keystone-db-sync-pvvng\" (UID: \"3fbb8794-f929-4bc3-9fc4-fc1e8589691b\") " pod="openstack/keystone-db-sync-pvvng" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.717644 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrwn7\" (UniqueName: \"kubernetes.io/projected/d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b-kube-api-access-wrwn7\") pod \"neutron-db-create-c4t2j\" (UID: \"d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b\") " pod="openstack/neutron-db-create-c4t2j" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.723220 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59dww\" (UniqueName: \"kubernetes.io/projected/3fbb8794-f929-4bc3-9fc4-fc1e8589691b-kube-api-access-59dww\") pod \"keystone-db-sync-pvvng\" (UID: \"3fbb8794-f929-4bc3-9fc4-fc1e8589691b\") " pod="openstack/keystone-db-sync-pvvng" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.771922 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-c4t2j" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.783074 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80170965-93b1-41a5-8a4b-e0e3c87beda4-operator-scripts\") pod \"neutron-c4c0-account-create-update-lrrxn\" (UID: \"80170965-93b1-41a5-8a4b-e0e3c87beda4\") " pod="openstack/neutron-c4c0-account-create-update-lrrxn" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.783258 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh72k\" (UniqueName: \"kubernetes.io/projected/80170965-93b1-41a5-8a4b-e0e3c87beda4-kube-api-access-nh72k\") pod \"neutron-c4c0-account-create-update-lrrxn\" (UID: \"80170965-93b1-41a5-8a4b-e0e3c87beda4\") " pod="openstack/neutron-c4c0-account-create-update-lrrxn" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.784112 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80170965-93b1-41a5-8a4b-e0e3c87beda4-operator-scripts\") pod \"neutron-c4c0-account-create-update-lrrxn\" (UID: \"80170965-93b1-41a5-8a4b-e0e3c87beda4\") " pod="openstack/neutron-c4c0-account-create-update-lrrxn" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.790704 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.811076 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh72k\" (UniqueName: \"kubernetes.io/projected/80170965-93b1-41a5-8a4b-e0e3c87beda4-kube-api-access-nh72k\") pod \"neutron-c4c0-account-create-update-lrrxn\" (UID: \"80170965-93b1-41a5-8a4b-e0e3c87beda4\") " pod="openstack/neutron-c4c0-account-create-update-lrrxn" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.849344 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-pvvng" Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.905907 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-qlsth"] Jan 29 11:45:53 crc kubenswrapper[4766]: I0129 11:45:53.907956 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-qlsth" podUID="b055f99f-ca12-47e3-9448-240b2f46ccb3" containerName="dnsmasq-dns" containerID="cri-o://a6e5ff7f659bca0a3ea5b18214bc4101db7a833041c25e66f73799878e50c43d" gracePeriod=10 Jan 29 11:45:54 crc kubenswrapper[4766]: I0129 11:45:54.044670 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c4c0-account-create-update-lrrxn" Jan 29 11:45:54 crc kubenswrapper[4766]: I0129 11:45:54.178716 4766 generic.go:334] "Generic (PLEG): container finished" podID="b055f99f-ca12-47e3-9448-240b2f46ccb3" containerID="a6e5ff7f659bca0a3ea5b18214bc4101db7a833041c25e66f73799878e50c43d" exitCode=0 Jan 29 11:45:54 crc kubenswrapper[4766]: I0129 11:45:54.178776 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-qlsth" event={"ID":"b055f99f-ca12-47e3-9448-240b2f46ccb3","Type":"ContainerDied","Data":"a6e5ff7f659bca0a3ea5b18214bc4101db7a833041c25e66f73799878e50c43d"} Jan 29 11:45:54 crc kubenswrapper[4766]: I0129 11:45:54.334925 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-vh7hf"] Jan 29 11:45:54 crc kubenswrapper[4766]: W0129 11:45:54.335644 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfeb7e1b9_9706_4112_88e1_6bd624f14680.slice/crio-8dd6a04ad8062d44b5e3a321feeaa01b2d1976b055cc29a64c4bb484315417af WatchSource:0}: Error finding container 8dd6a04ad8062d44b5e3a321feeaa01b2d1976b055cc29a64c4bb484315417af: Status 404 returned error can't find the container with id 8dd6a04ad8062d44b5e3a321feeaa01b2d1976b055cc29a64c4bb484315417af Jan 29 11:45:54 crc kubenswrapper[4766]: I0129 11:45:54.651790 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-c583-account-create-update-czpwn"] Jan 29 11:45:54 crc kubenswrapper[4766]: I0129 11:45:54.715300 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-c4t2j"] Jan 29 11:45:54 crc kubenswrapper[4766]: I0129 11:45:54.729473 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2b54-account-create-update-p9795"] Jan 29 11:45:54 crc kubenswrapper[4766]: W0129 11:45:54.748563 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd18ea196_39f9_4cb4_b0f3_6ac9ec23b11b.slice/crio-c5685675ec5b4289acf84d87fb8eaf8b49c96c6076917d0d0027a7930615b9dd WatchSource:0}: Error finding container c5685675ec5b4289acf84d87fb8eaf8b49c96c6076917d0d0027a7930615b9dd: Status 404 returned error can't find the container with id c5685675ec5b4289acf84d87fb8eaf8b49c96c6076917d0d0027a7930615b9dd Jan 29 11:45:54 crc kubenswrapper[4766]: I0129 11:45:54.867877 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c4c0-account-create-update-lrrxn"] Jan 29 11:45:54 crc kubenswrapper[4766]: I0129 11:45:54.923874 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-pvvng"] Jan 29 11:45:54 crc kubenswrapper[4766]: W0129 11:45:54.928818 4766 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fbb8794_f929_4bc3_9fc4_fc1e8589691b.slice/crio-3612e08f6f717cf055606feb50755a027b988972ec3d8df920776dee39fb1b4e WatchSource:0}: Error finding container 3612e08f6f717cf055606feb50755a027b988972ec3d8df920776dee39fb1b4e: Status 404 returned error can't find the container with id 3612e08f6f717cf055606feb50755a027b988972ec3d8df920776dee39fb1b4e Jan 29 11:45:54 crc kubenswrapper[4766]: I0129 11:45:54.944594 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-q9q6t"] Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.066338 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-qlsth" Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.161994 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b055f99f-ca12-47e3-9448-240b2f46ccb3-config\") pod \"b055f99f-ca12-47e3-9448-240b2f46ccb3\" (UID: \"b055f99f-ca12-47e3-9448-240b2f46ccb3\") " Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.162091 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b055f99f-ca12-47e3-9448-240b2f46ccb3-ovsdbserver-nb\") pod \"b055f99f-ca12-47e3-9448-240b2f46ccb3\" (UID: \"b055f99f-ca12-47e3-9448-240b2f46ccb3\") " Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.162174 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b055f99f-ca12-47e3-9448-240b2f46ccb3-ovsdbserver-sb\") pod \"b055f99f-ca12-47e3-9448-240b2f46ccb3\" (UID: \"b055f99f-ca12-47e3-9448-240b2f46ccb3\") " Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.162270 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b055f99f-ca12-47e3-9448-240b2f46ccb3-dns-svc\") pod \"b055f99f-ca12-47e3-9448-240b2f46ccb3\" (UID: \"b055f99f-ca12-47e3-9448-240b2f46ccb3\") " Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.162350 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzp77\" (UniqueName: \"kubernetes.io/projected/b055f99f-ca12-47e3-9448-240b2f46ccb3-kube-api-access-gzp77\") pod \"b055f99f-ca12-47e3-9448-240b2f46ccb3\" (UID: \"b055f99f-ca12-47e3-9448-240b2f46ccb3\") " Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.175177 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b055f99f-ca12-47e3-9448-240b2f46ccb3-kube-api-access-gzp77" (OuterVolumeSpecName: "kube-api-access-gzp77") pod "b055f99f-ca12-47e3-9448-240b2f46ccb3" (UID: "b055f99f-ca12-47e3-9448-240b2f46ccb3"). InnerVolumeSpecName "kube-api-access-gzp77". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.207527 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-qlsth" event={"ID":"b055f99f-ca12-47e3-9448-240b2f46ccb3","Type":"ContainerDied","Data":"fc1a11b00cddbba14b395e9ec8d8012c4a97046bdfe9aa7e1c0fe143166465fc"} Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.207720 4766 scope.go:117] "RemoveContainer" containerID="a6e5ff7f659bca0a3ea5b18214bc4101db7a833041c25e66f73799878e50c43d" Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.207941 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-qlsth" Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.217824 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pvvng" event={"ID":"3fbb8794-f929-4bc3-9fc4-fc1e8589691b","Type":"ContainerStarted","Data":"3612e08f6f717cf055606feb50755a027b988972ec3d8df920776dee39fb1b4e"} Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.219740 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c583-account-create-update-czpwn" event={"ID":"d55d102c-fb19-40f9-be67-8234ec2232c4","Type":"ContainerStarted","Data":"b7e72beaf22658c2e8cba01afcdcc85729f545e1f1f6abf942d86470e5e22369"} Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.219864 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c583-account-create-update-czpwn" event={"ID":"d55d102c-fb19-40f9-be67-8234ec2232c4","Type":"ContainerStarted","Data":"325f2727d12594e3a75561dc1aab639c06e7ca04bdd096c7fb811d4165d29880"} Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.233288 4766 generic.go:334] "Generic (PLEG): container finished" podID="feb7e1b9-9706-4112-88e1-6bd624f14680" containerID="c2a732eef5e758404b95f298f724b689625e87f72dd1bc97d1ce025f6bf657aa" exitCode=0 Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.252057 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-c583-account-create-update-czpwn" podStartSLOduration=2.252021561 podStartE2EDuration="2.252021561s" podCreationTimestamp="2026-01-29 11:45:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:45:55.245583573 +0000 UTC m=+1492.357976584" watchObservedRunningTime="2026-01-29 11:45:55.252021561 +0000 UTC m=+1492.364414582" Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.265931 4766 scope.go:117] "RemoveContainer" containerID="3235d14f10e320a00952adb57cee9f433f625006dd05ba3746a723cd72eb3673" Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.266689 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzp77\" (UniqueName: \"kubernetes.io/projected/b055f99f-ca12-47e3-9448-240b2f46ccb3-kube-api-access-gzp77\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.271351 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vh7hf" event={"ID":"feb7e1b9-9706-4112-88e1-6bd624f14680","Type":"ContainerDied","Data":"c2a732eef5e758404b95f298f724b689625e87f72dd1bc97d1ce025f6bf657aa"} Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.271399 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vh7hf" 
event={"ID":"feb7e1b9-9706-4112-88e1-6bd624f14680","Type":"ContainerStarted","Data":"8dd6a04ad8062d44b5e3a321feeaa01b2d1976b055cc29a64c4bb484315417af"} Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.271461 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-c4t2j" event={"ID":"d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b","Type":"ContainerStarted","Data":"833f367854feff5ffe5e8329ce997906add38460016ec6b032fbca38e61fe6d2"} Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.271479 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-c4t2j" event={"ID":"d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b","Type":"ContainerStarted","Data":"c5685675ec5b4289acf84d87fb8eaf8b49c96c6076917d0d0027a7930615b9dd"} Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.278734 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-q9q6t" event={"ID":"5a17ce00-9749-4f6d-8259-b25a78cdf8a7","Type":"ContainerStarted","Data":"1c45da3336b5e0b88f5d024afeef9ae9e5a62e614359e412969113c2fb245cb3"} Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.295092 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2b54-account-create-update-p9795" event={"ID":"212d3fdc-eac2-4868-a017-878a6f0d3cea","Type":"ContainerStarted","Data":"ff71015a48cd41c90b8eaa7f1da6a4da595e58f31de96caaf15023c6a05581ea"} Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.295119 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2b54-account-create-update-p9795" event={"ID":"212d3fdc-eac2-4868-a017-878a6f0d3cea","Type":"ContainerStarted","Data":"d4814c4f257e7a705766618b4145ffb7ca903ee8bdcba515549169809f08e533"} Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.304382 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c4c0-account-create-update-lrrxn" event={"ID":"80170965-93b1-41a5-8a4b-e0e3c87beda4","Type":"ContainerStarted","Data":"dff4eb6a70393a3143d9f6d06544d49203de4a9ecde3454b50d15e707b659e17"} Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.422022 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b055f99f-ca12-47e3-9448-240b2f46ccb3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b055f99f-ca12-47e3-9448-240b2f46ccb3" (UID: "b055f99f-ca12-47e3-9448-240b2f46ccb3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.429347 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b055f99f-ca12-47e3-9448-240b2f46ccb3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b055f99f-ca12-47e3-9448-240b2f46ccb3" (UID: "b055f99f-ca12-47e3-9448-240b2f46ccb3"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.430905 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-2b54-account-create-update-p9795" podStartSLOduration=2.430891112 podStartE2EDuration="2.430891112s" podCreationTimestamp="2026-01-29 11:45:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:45:55.42721421 +0000 UTC m=+1492.539607231" watchObservedRunningTime="2026-01-29 11:45:55.430891112 +0000 UTC m=+1492.543284123" Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.431984 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b055f99f-ca12-47e3-9448-240b2f46ccb3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b055f99f-ca12-47e3-9448-240b2f46ccb3" (UID: "b055f99f-ca12-47e3-9448-240b2f46ccb3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.445028 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-c4t2j" podStartSLOduration=2.445008554 podStartE2EDuration="2.445008554s" podCreationTimestamp="2026-01-29 11:45:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:45:55.443358428 +0000 UTC m=+1492.555751439" watchObservedRunningTime="2026-01-29 11:45:55.445008554 +0000 UTC m=+1492.557401565" Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.446958 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b055f99f-ca12-47e3-9448-240b2f46ccb3-config" (OuterVolumeSpecName: "config") pod "b055f99f-ca12-47e3-9448-240b2f46ccb3" (UID: "b055f99f-ca12-47e3-9448-240b2f46ccb3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.461827 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-c4c0-account-create-update-lrrxn" podStartSLOduration=2.461807449 podStartE2EDuration="2.461807449s" podCreationTimestamp="2026-01-29 11:45:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:45:55.456804721 +0000 UTC m=+1492.569197742" watchObservedRunningTime="2026-01-29 11:45:55.461807449 +0000 UTC m=+1492.574200470" Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.473161 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b055f99f-ca12-47e3-9448-240b2f46ccb3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.473195 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b055f99f-ca12-47e3-9448-240b2f46ccb3-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.473203 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b055f99f-ca12-47e3-9448-240b2f46ccb3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.473213 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b055f99f-ca12-47e3-9448-240b2f46ccb3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.849502 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-qlsth"] Jan 29 11:45:55 crc kubenswrapper[4766]: I0129 11:45:55.857384 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-qlsth"] Jan 29 11:45:56 crc kubenswrapper[4766]: I0129 11:45:56.356007 4766 generic.go:334] "Generic (PLEG): container finished" podID="d55d102c-fb19-40f9-be67-8234ec2232c4" containerID="b7e72beaf22658c2e8cba01afcdcc85729f545e1f1f6abf942d86470e5e22369" exitCode=0 Jan 29 11:45:56 crc kubenswrapper[4766]: I0129 11:45:56.356354 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c583-account-create-update-czpwn" event={"ID":"d55d102c-fb19-40f9-be67-8234ec2232c4","Type":"ContainerDied","Data":"b7e72beaf22658c2e8cba01afcdcc85729f545e1f1f6abf942d86470e5e22369"} Jan 29 11:45:56 crc kubenswrapper[4766]: I0129 11:45:56.368820 4766 generic.go:334] "Generic (PLEG): container finished" podID="d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b" containerID="833f367854feff5ffe5e8329ce997906add38460016ec6b032fbca38e61fe6d2" exitCode=0 Jan 29 11:45:56 crc kubenswrapper[4766]: I0129 11:45:56.368924 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-c4t2j" event={"ID":"d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b","Type":"ContainerDied","Data":"833f367854feff5ffe5e8329ce997906add38460016ec6b032fbca38e61fe6d2"} Jan 29 11:45:56 crc kubenswrapper[4766]: I0129 11:45:56.370150 4766 generic.go:334] "Generic (PLEG): container finished" podID="5a17ce00-9749-4f6d-8259-b25a78cdf8a7" containerID="8393252b4e7275a9df8ff86e89fa4bd74cece45a96b553385aa19ae061de0cca" exitCode=0 Jan 29 11:45:56 crc kubenswrapper[4766]: I0129 11:45:56.370191 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-db-create-q9q6t" event={"ID":"5a17ce00-9749-4f6d-8259-b25a78cdf8a7","Type":"ContainerDied","Data":"8393252b4e7275a9df8ff86e89fa4bd74cece45a96b553385aa19ae061de0cca"} Jan 29 11:45:56 crc kubenswrapper[4766]: I0129 11:45:56.374310 4766 generic.go:334] "Generic (PLEG): container finished" podID="212d3fdc-eac2-4868-a017-878a6f0d3cea" containerID="ff71015a48cd41c90b8eaa7f1da6a4da595e58f31de96caaf15023c6a05581ea" exitCode=0 Jan 29 11:45:56 crc kubenswrapper[4766]: I0129 11:45:56.374371 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2b54-account-create-update-p9795" event={"ID":"212d3fdc-eac2-4868-a017-878a6f0d3cea","Type":"ContainerDied","Data":"ff71015a48cd41c90b8eaa7f1da6a4da595e58f31de96caaf15023c6a05581ea"} Jan 29 11:45:56 crc kubenswrapper[4766]: I0129 11:45:56.375645 4766 generic.go:334] "Generic (PLEG): container finished" podID="80170965-93b1-41a5-8a4b-e0e3c87beda4" containerID="c6589f9f2c3d11e7c6da1ef88d0204322056c915176e7f812a7d04c2e1080a29" exitCode=0 Jan 29 11:45:56 crc kubenswrapper[4766]: I0129 11:45:56.376600 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c4c0-account-create-update-lrrxn" event={"ID":"80170965-93b1-41a5-8a4b-e0e3c87beda4","Type":"ContainerDied","Data":"c6589f9f2c3d11e7c6da1ef88d0204322056c915176e7f812a7d04c2e1080a29"} Jan 29 11:45:56 crc kubenswrapper[4766]: I0129 11:45:56.849844 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-vh7hf" Jan 29 11:45:56 crc kubenswrapper[4766]: I0129 11:45:56.902728 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb2jq\" (UniqueName: \"kubernetes.io/projected/feb7e1b9-9706-4112-88e1-6bd624f14680-kube-api-access-rb2jq\") pod \"feb7e1b9-9706-4112-88e1-6bd624f14680\" (UID: \"feb7e1b9-9706-4112-88e1-6bd624f14680\") " Jan 29 11:45:56 crc kubenswrapper[4766]: I0129 11:45:56.902889 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feb7e1b9-9706-4112-88e1-6bd624f14680-operator-scripts\") pod \"feb7e1b9-9706-4112-88e1-6bd624f14680\" (UID: \"feb7e1b9-9706-4112-88e1-6bd624f14680\") " Jan 29 11:45:56 crc kubenswrapper[4766]: I0129 11:45:56.903812 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/feb7e1b9-9706-4112-88e1-6bd624f14680-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "feb7e1b9-9706-4112-88e1-6bd624f14680" (UID: "feb7e1b9-9706-4112-88e1-6bd624f14680"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:45:56 crc kubenswrapper[4766]: I0129 11:45:56.908732 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feb7e1b9-9706-4112-88e1-6bd624f14680-kube-api-access-rb2jq" (OuterVolumeSpecName: "kube-api-access-rb2jq") pod "feb7e1b9-9706-4112-88e1-6bd624f14680" (UID: "feb7e1b9-9706-4112-88e1-6bd624f14680"). InnerVolumeSpecName "kube-api-access-rb2jq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:45:57 crc kubenswrapper[4766]: I0129 11:45:57.004693 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb2jq\" (UniqueName: \"kubernetes.io/projected/feb7e1b9-9706-4112-88e1-6bd624f14680-kube-api-access-rb2jq\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:57 crc kubenswrapper[4766]: I0129 11:45:57.004733 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feb7e1b9-9706-4112-88e1-6bd624f14680-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:57 crc kubenswrapper[4766]: I0129 11:45:57.239938 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b055f99f-ca12-47e3-9448-240b2f46ccb3" path="/var/lib/kubelet/pods/b055f99f-ca12-47e3-9448-240b2f46ccb3/volumes" Jan 29 11:45:57 crc kubenswrapper[4766]: I0129 11:45:57.385175 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-vh7hf" Jan 29 11:45:57 crc kubenswrapper[4766]: I0129 11:45:57.385677 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vh7hf" event={"ID":"feb7e1b9-9706-4112-88e1-6bd624f14680","Type":"ContainerDied","Data":"8dd6a04ad8062d44b5e3a321feeaa01b2d1976b055cc29a64c4bb484315417af"} Jan 29 11:45:57 crc kubenswrapper[4766]: I0129 11:45:57.385760 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8dd6a04ad8062d44b5e3a321feeaa01b2d1976b055cc29a64c4bb484315417af" Jan 29 11:45:57 crc kubenswrapper[4766]: I0129 11:45:57.771729 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c583-account-create-update-czpwn" Jan 29 11:45:57 crc kubenswrapper[4766]: I0129 11:45:57.818116 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmr7m\" (UniqueName: \"kubernetes.io/projected/d55d102c-fb19-40f9-be67-8234ec2232c4-kube-api-access-gmr7m\") pod \"d55d102c-fb19-40f9-be67-8234ec2232c4\" (UID: \"d55d102c-fb19-40f9-be67-8234ec2232c4\") " Jan 29 11:45:57 crc kubenswrapper[4766]: I0129 11:45:57.818318 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d55d102c-fb19-40f9-be67-8234ec2232c4-operator-scripts\") pod \"d55d102c-fb19-40f9-be67-8234ec2232c4\" (UID: \"d55d102c-fb19-40f9-be67-8234ec2232c4\") " Jan 29 11:45:57 crc kubenswrapper[4766]: I0129 11:45:57.819317 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d55d102c-fb19-40f9-be67-8234ec2232c4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d55d102c-fb19-40f9-be67-8234ec2232c4" (UID: "d55d102c-fb19-40f9-be67-8234ec2232c4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:45:57 crc kubenswrapper[4766]: I0129 11:45:57.822272 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d55d102c-fb19-40f9-be67-8234ec2232c4-kube-api-access-gmr7m" (OuterVolumeSpecName: "kube-api-access-gmr7m") pod "d55d102c-fb19-40f9-be67-8234ec2232c4" (UID: "d55d102c-fb19-40f9-be67-8234ec2232c4"). InnerVolumeSpecName "kube-api-access-gmr7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:45:57 crc kubenswrapper[4766]: I0129 11:45:57.919817 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d55d102c-fb19-40f9-be67-8234ec2232c4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:57 crc kubenswrapper[4766]: I0129 11:45:57.920073 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmr7m\" (UniqueName: \"kubernetes.io/projected/d55d102c-fb19-40f9-be67-8234ec2232c4-kube-api-access-gmr7m\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.001174 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-q9q6t" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.005259 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c4c0-account-create-update-lrrxn" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.020816 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a17ce00-9749-4f6d-8259-b25a78cdf8a7-operator-scripts\") pod \"5a17ce00-9749-4f6d-8259-b25a78cdf8a7\" (UID: \"5a17ce00-9749-4f6d-8259-b25a78cdf8a7\") " Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.020873 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80170965-93b1-41a5-8a4b-e0e3c87beda4-operator-scripts\") pod \"80170965-93b1-41a5-8a4b-e0e3c87beda4\" (UID: \"80170965-93b1-41a5-8a4b-e0e3c87beda4\") " Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.020918 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6pp4\" (UniqueName: \"kubernetes.io/projected/5a17ce00-9749-4f6d-8259-b25a78cdf8a7-kube-api-access-r6pp4\") pod \"5a17ce00-9749-4f6d-8259-b25a78cdf8a7\" (UID: \"5a17ce00-9749-4f6d-8259-b25a78cdf8a7\") " Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.020957 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nh72k\" (UniqueName: \"kubernetes.io/projected/80170965-93b1-41a5-8a4b-e0e3c87beda4-kube-api-access-nh72k\") pod \"80170965-93b1-41a5-8a4b-e0e3c87beda4\" (UID: \"80170965-93b1-41a5-8a4b-e0e3c87beda4\") " Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.021757 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80170965-93b1-41a5-8a4b-e0e3c87beda4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "80170965-93b1-41a5-8a4b-e0e3c87beda4" (UID: "80170965-93b1-41a5-8a4b-e0e3c87beda4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.021968 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a17ce00-9749-4f6d-8259-b25a78cdf8a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a17ce00-9749-4f6d-8259-b25a78cdf8a7" (UID: "5a17ce00-9749-4f6d-8259-b25a78cdf8a7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.022300 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a17ce00-9749-4f6d-8259-b25a78cdf8a7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.022330 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80170965-93b1-41a5-8a4b-e0e3c87beda4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.031503 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a17ce00-9749-4f6d-8259-b25a78cdf8a7-kube-api-access-r6pp4" (OuterVolumeSpecName: "kube-api-access-r6pp4") pod "5a17ce00-9749-4f6d-8259-b25a78cdf8a7" (UID: "5a17ce00-9749-4f6d-8259-b25a78cdf8a7"). InnerVolumeSpecName "kube-api-access-r6pp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.044560 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80170965-93b1-41a5-8a4b-e0e3c87beda4-kube-api-access-nh72k" (OuterVolumeSpecName: "kube-api-access-nh72k") pod "80170965-93b1-41a5-8a4b-e0e3c87beda4" (UID: "80170965-93b1-41a5-8a4b-e0e3c87beda4"). InnerVolumeSpecName "kube-api-access-nh72k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.121578 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2b54-account-create-update-p9795" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.123515 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6pp4\" (UniqueName: \"kubernetes.io/projected/5a17ce00-9749-4f6d-8259-b25a78cdf8a7-kube-api-access-r6pp4\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.123541 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nh72k\" (UniqueName: \"kubernetes.io/projected/80170965-93b1-41a5-8a4b-e0e3c87beda4-kube-api-access-nh72k\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.127200 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-c4t2j" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.224214 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrwn7\" (UniqueName: \"kubernetes.io/projected/d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b-kube-api-access-wrwn7\") pod \"d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b\" (UID: \"d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b\") " Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.224408 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b-operator-scripts\") pod \"d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b\" (UID: \"d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b\") " Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.224528 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/212d3fdc-eac2-4868-a017-878a6f0d3cea-operator-scripts\") pod \"212d3fdc-eac2-4868-a017-878a6f0d3cea\" (UID: \"212d3fdc-eac2-4868-a017-878a6f0d3cea\") " Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.224560 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29bdf\" (UniqueName: \"kubernetes.io/projected/212d3fdc-eac2-4868-a017-878a6f0d3cea-kube-api-access-29bdf\") pod \"212d3fdc-eac2-4868-a017-878a6f0d3cea\" (UID: \"212d3fdc-eac2-4868-a017-878a6f0d3cea\") " Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.226127 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b" (UID: "d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.226858 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/212d3fdc-eac2-4868-a017-878a6f0d3cea-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "212d3fdc-eac2-4868-a017-878a6f0d3cea" (UID: "212d3fdc-eac2-4868-a017-878a6f0d3cea"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.228323 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/212d3fdc-eac2-4868-a017-878a6f0d3cea-kube-api-access-29bdf" (OuterVolumeSpecName: "kube-api-access-29bdf") pod "212d3fdc-eac2-4868-a017-878a6f0d3cea" (UID: "212d3fdc-eac2-4868-a017-878a6f0d3cea"). InnerVolumeSpecName "kube-api-access-29bdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.229251 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b-kube-api-access-wrwn7" (OuterVolumeSpecName: "kube-api-access-wrwn7") pod "d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b" (UID: "d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b"). InnerVolumeSpecName "kube-api-access-wrwn7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.326792 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.326845 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/212d3fdc-eac2-4868-a017-878a6f0d3cea-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.326858 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29bdf\" (UniqueName: \"kubernetes.io/projected/212d3fdc-eac2-4868-a017-878a6f0d3cea-kube-api-access-29bdf\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.326872 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrwn7\" (UniqueName: \"kubernetes.io/projected/d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b-kube-api-access-wrwn7\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.394648 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c4c0-account-create-update-lrrxn" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.396506 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c4c0-account-create-update-lrrxn" event={"ID":"80170965-93b1-41a5-8a4b-e0e3c87beda4","Type":"ContainerDied","Data":"dff4eb6a70393a3143d9f6d06544d49203de4a9ecde3454b50d15e707b659e17"} Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.396549 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dff4eb6a70393a3143d9f6d06544d49203de4a9ecde3454b50d15e707b659e17" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.398722 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c583-account-create-update-czpwn" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.399132 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c583-account-create-update-czpwn" event={"ID":"d55d102c-fb19-40f9-be67-8234ec2232c4","Type":"ContainerDied","Data":"325f2727d12594e3a75561dc1aab639c06e7ca04bdd096c7fb811d4165d29880"} Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.399158 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="325f2727d12594e3a75561dc1aab639c06e7ca04bdd096c7fb811d4165d29880" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.405075 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-c4t2j" event={"ID":"d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b","Type":"ContainerDied","Data":"c5685675ec5b4289acf84d87fb8eaf8b49c96c6076917d0d0027a7930615b9dd"} Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.405129 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5685675ec5b4289acf84d87fb8eaf8b49c96c6076917d0d0027a7930615b9dd" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.405087 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-c4t2j" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.415203 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-q9q6t" event={"ID":"5a17ce00-9749-4f6d-8259-b25a78cdf8a7","Type":"ContainerDied","Data":"1c45da3336b5e0b88f5d024afeef9ae9e5a62e614359e412969113c2fb245cb3"} Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.415223 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-q9q6t" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.415234 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c45da3336b5e0b88f5d024afeef9ae9e5a62e614359e412969113c2fb245cb3" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.421007 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2b54-account-create-update-p9795" event={"ID":"212d3fdc-eac2-4868-a017-878a6f0d3cea","Type":"ContainerDied","Data":"d4814c4f257e7a705766618b4145ffb7ca903ee8bdcba515549169809f08e533"} Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.421054 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4814c4f257e7a705766618b4145ffb7ca903ee8bdcba515549169809f08e533" Jan 29 11:45:58 crc kubenswrapper[4766]: I0129 11:45:58.421106 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2b54-account-create-update-p9795" Jan 29 11:46:02 crc kubenswrapper[4766]: I0129 11:46:02.453747 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pvvng" event={"ID":"3fbb8794-f929-4bc3-9fc4-fc1e8589691b","Type":"ContainerStarted","Data":"14a2b107d79060a55cc0736ce17cf6283f9abfc277732b4f4dd73c17cc6a1369"} Jan 29 11:46:02 crc kubenswrapper[4766]: I0129 11:46:02.475221 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-pvvng" podStartSLOduration=2.8327630470000003 podStartE2EDuration="9.475202187s" podCreationTimestamp="2026-01-29 11:45:53 +0000 UTC" firstStartedPulling="2026-01-29 11:45:54.936345787 +0000 UTC m=+1492.048738798" lastFinishedPulling="2026-01-29 11:46:01.578784927 +0000 UTC m=+1498.691177938" observedRunningTime="2026-01-29 11:46:02.4670138 +0000 UTC m=+1499.579406821" watchObservedRunningTime="2026-01-29 11:46:02.475202187 +0000 UTC m=+1499.587595198" Jan 29 11:46:04 crc kubenswrapper[4766]: E0129 11:46:04.927927 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fbb8794_f929_4bc3_9fc4_fc1e8589691b.slice/crio-conmon-14a2b107d79060a55cc0736ce17cf6283f9abfc277732b4f4dd73c17cc6a1369.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fbb8794_f929_4bc3_9fc4_fc1e8589691b.slice/crio-14a2b107d79060a55cc0736ce17cf6283f9abfc277732b4f4dd73c17cc6a1369.scope\": RecentStats: unable to find data in memory cache]" Jan 29 11:46:05 crc kubenswrapper[4766]: I0129 11:46:05.477634 4766 generic.go:334] "Generic (PLEG): container finished" podID="3fbb8794-f929-4bc3-9fc4-fc1e8589691b" containerID="14a2b107d79060a55cc0736ce17cf6283f9abfc277732b4f4dd73c17cc6a1369" exitCode=0 Jan 29 11:46:05 crc kubenswrapper[4766]: I0129 11:46:05.477724 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pvvng" 
event={"ID":"3fbb8794-f929-4bc3-9fc4-fc1e8589691b","Type":"ContainerDied","Data":"14a2b107d79060a55cc0736ce17cf6283f9abfc277732b4f4dd73c17cc6a1369"} Jan 29 11:46:06 crc kubenswrapper[4766]: I0129 11:46:06.827949 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-pvvng" Jan 29 11:46:06 crc kubenswrapper[4766]: I0129 11:46:06.870219 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fbb8794-f929-4bc3-9fc4-fc1e8589691b-config-data\") pod \"3fbb8794-f929-4bc3-9fc4-fc1e8589691b\" (UID: \"3fbb8794-f929-4bc3-9fc4-fc1e8589691b\") " Jan 29 11:46:06 crc kubenswrapper[4766]: I0129 11:46:06.870267 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59dww\" (UniqueName: \"kubernetes.io/projected/3fbb8794-f929-4bc3-9fc4-fc1e8589691b-kube-api-access-59dww\") pod \"3fbb8794-f929-4bc3-9fc4-fc1e8589691b\" (UID: \"3fbb8794-f929-4bc3-9fc4-fc1e8589691b\") " Jan 29 11:46:06 crc kubenswrapper[4766]: I0129 11:46:06.870490 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fbb8794-f929-4bc3-9fc4-fc1e8589691b-combined-ca-bundle\") pod \"3fbb8794-f929-4bc3-9fc4-fc1e8589691b\" (UID: \"3fbb8794-f929-4bc3-9fc4-fc1e8589691b\") " Jan 29 11:46:06 crc kubenswrapper[4766]: I0129 11:46:06.876377 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fbb8794-f929-4bc3-9fc4-fc1e8589691b-kube-api-access-59dww" (OuterVolumeSpecName: "kube-api-access-59dww") pod "3fbb8794-f929-4bc3-9fc4-fc1e8589691b" (UID: "3fbb8794-f929-4bc3-9fc4-fc1e8589691b"). InnerVolumeSpecName "kube-api-access-59dww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:46:06 crc kubenswrapper[4766]: I0129 11:46:06.898734 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fbb8794-f929-4bc3-9fc4-fc1e8589691b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3fbb8794-f929-4bc3-9fc4-fc1e8589691b" (UID: "3fbb8794-f929-4bc3-9fc4-fc1e8589691b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:46:06 crc kubenswrapper[4766]: I0129 11:46:06.914502 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fbb8794-f929-4bc3-9fc4-fc1e8589691b-config-data" (OuterVolumeSpecName: "config-data") pod "3fbb8794-f929-4bc3-9fc4-fc1e8589691b" (UID: "3fbb8794-f929-4bc3-9fc4-fc1e8589691b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:46:06 crc kubenswrapper[4766]: I0129 11:46:06.971988 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fbb8794-f929-4bc3-9fc4-fc1e8589691b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:06 crc kubenswrapper[4766]: I0129 11:46:06.972019 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59dww\" (UniqueName: \"kubernetes.io/projected/3fbb8794-f929-4bc3-9fc4-fc1e8589691b-kube-api-access-59dww\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:06 crc kubenswrapper[4766]: I0129 11:46:06.972034 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fbb8794-f929-4bc3-9fc4-fc1e8589691b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.496485 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pvvng" event={"ID":"3fbb8794-f929-4bc3-9fc4-fc1e8589691b","Type":"ContainerDied","Data":"3612e08f6f717cf055606feb50755a027b988972ec3d8df920776dee39fb1b4e"} Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.496780 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3612e08f6f717cf055606feb50755a027b988972ec3d8df920776dee39fb1b4e" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.496533 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-pvvng" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.776805 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-dk645"] Jan 29 11:46:07 crc kubenswrapper[4766]: E0129 11:46:07.777294 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b" containerName="mariadb-database-create" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.777323 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b" containerName="mariadb-database-create" Jan 29 11:46:07 crc kubenswrapper[4766]: E0129 11:46:07.777344 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fbb8794-f929-4bc3-9fc4-fc1e8589691b" containerName="keystone-db-sync" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.777353 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fbb8794-f929-4bc3-9fc4-fc1e8589691b" containerName="keystone-db-sync" Jan 29 11:46:07 crc kubenswrapper[4766]: E0129 11:46:07.777375 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a17ce00-9749-4f6d-8259-b25a78cdf8a7" containerName="mariadb-database-create" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.777383 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a17ce00-9749-4f6d-8259-b25a78cdf8a7" containerName="mariadb-database-create" Jan 29 11:46:07 crc kubenswrapper[4766]: E0129 11:46:07.777403 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b055f99f-ca12-47e3-9448-240b2f46ccb3" containerName="dnsmasq-dns" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.777428 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b055f99f-ca12-47e3-9448-240b2f46ccb3" containerName="dnsmasq-dns" Jan 29 11:46:07 crc kubenswrapper[4766]: E0129 11:46:07.777448 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b055f99f-ca12-47e3-9448-240b2f46ccb3" containerName="init" Jan 29 
11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.777455 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b055f99f-ca12-47e3-9448-240b2f46ccb3" containerName="init" Jan 29 11:46:07 crc kubenswrapper[4766]: E0129 11:46:07.777478 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb7e1b9-9706-4112-88e1-6bd624f14680" containerName="mariadb-database-create" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.777487 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb7e1b9-9706-4112-88e1-6bd624f14680" containerName="mariadb-database-create" Jan 29 11:46:07 crc kubenswrapper[4766]: E0129 11:46:07.777503 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="212d3fdc-eac2-4868-a017-878a6f0d3cea" containerName="mariadb-account-create-update" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.777511 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="212d3fdc-eac2-4868-a017-878a6f0d3cea" containerName="mariadb-account-create-update" Jan 29 11:46:07 crc kubenswrapper[4766]: E0129 11:46:07.777548 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80170965-93b1-41a5-8a4b-e0e3c87beda4" containerName="mariadb-account-create-update" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.777556 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="80170965-93b1-41a5-8a4b-e0e3c87beda4" containerName="mariadb-account-create-update" Jan 29 11:46:07 crc kubenswrapper[4766]: E0129 11:46:07.777585 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d55d102c-fb19-40f9-be67-8234ec2232c4" containerName="mariadb-account-create-update" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.777595 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d55d102c-fb19-40f9-be67-8234ec2232c4" containerName="mariadb-account-create-update" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.777777 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b" containerName="mariadb-database-create" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.777798 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="80170965-93b1-41a5-8a4b-e0e3c87beda4" containerName="mariadb-account-create-update" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.777812 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a17ce00-9749-4f6d-8259-b25a78cdf8a7" containerName="mariadb-database-create" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.777821 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="feb7e1b9-9706-4112-88e1-6bd624f14680" containerName="mariadb-database-create" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.777834 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fbb8794-f929-4bc3-9fc4-fc1e8589691b" containerName="keystone-db-sync" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.777848 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b055f99f-ca12-47e3-9448-240b2f46ccb3" containerName="dnsmasq-dns" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.777860 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="212d3fdc-eac2-4868-a017-878a6f0d3cea" containerName="mariadb-account-create-update" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.777872 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d55d102c-fb19-40f9-be67-8234ec2232c4" containerName="mariadb-account-create-update" Jan 
29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.779996 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dk645" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.783670 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-fernet-keys\") pod \"keystone-bootstrap-dk645\" (UID: \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\") " pod="openstack/keystone-bootstrap-dk645" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.783721 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-scripts\") pod \"keystone-bootstrap-dk645\" (UID: \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\") " pod="openstack/keystone-bootstrap-dk645" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.783750 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-credential-keys\") pod \"keystone-bootstrap-dk645\" (UID: \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\") " pod="openstack/keystone-bootstrap-dk645" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.783795 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7kvk\" (UniqueName: \"kubernetes.io/projected/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-kube-api-access-j7kvk\") pod \"keystone-bootstrap-dk645\" (UID: \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\") " pod="openstack/keystone-bootstrap-dk645" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.783877 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-config-data\") pod \"keystone-bootstrap-dk645\" (UID: \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\") " pod="openstack/keystone-bootstrap-dk645" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.783920 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-combined-ca-bundle\") pod \"keystone-bootstrap-dk645\" (UID: \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\") " pod="openstack/keystone-bootstrap-dk645" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.792182 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dk645"] Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.807751 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.807952 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.808090 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-68trd" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.808286 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.808387 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 11:46:07 crc 
kubenswrapper[4766]: I0129 11:46:07.817741 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-bthzf"] Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.819924 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-bthzf" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.838593 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-bthzf"] Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.888259 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-bthzf\" (UID: \"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\") " pod="openstack/dnsmasq-dns-bbf5cc879-bthzf" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.888308 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-config\") pod \"dnsmasq-dns-bbf5cc879-bthzf\" (UID: \"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\") " pod="openstack/dnsmasq-dns-bbf5cc879-bthzf" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.888336 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7kvk\" (UniqueName: \"kubernetes.io/projected/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-kube-api-access-j7kvk\") pod \"keystone-bootstrap-dk645\" (UID: \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\") " pod="openstack/keystone-bootstrap-dk645" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.888489 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq9zx\" (UniqueName: \"kubernetes.io/projected/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-kube-api-access-cq9zx\") pod \"dnsmasq-dns-bbf5cc879-bthzf\" (UID: \"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\") " pod="openstack/dnsmasq-dns-bbf5cc879-bthzf" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.888549 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-bthzf\" (UID: \"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\") " pod="openstack/dnsmasq-dns-bbf5cc879-bthzf" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.888626 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-bthzf\" (UID: \"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\") " pod="openstack/dnsmasq-dns-bbf5cc879-bthzf" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.888704 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-bthzf\" (UID: \"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\") " pod="openstack/dnsmasq-dns-bbf5cc879-bthzf" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.888776 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-config-data\") pod \"keystone-bootstrap-dk645\" (UID: \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\") " pod="openstack/keystone-bootstrap-dk645" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.888888 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-combined-ca-bundle\") pod \"keystone-bootstrap-dk645\" (UID: \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\") " pod="openstack/keystone-bootstrap-dk645" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.888932 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-fernet-keys\") pod \"keystone-bootstrap-dk645\" (UID: \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\") " pod="openstack/keystone-bootstrap-dk645" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.888978 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-scripts\") pod \"keystone-bootstrap-dk645\" (UID: \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\") " pod="openstack/keystone-bootstrap-dk645" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.889023 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-credential-keys\") pod \"keystone-bootstrap-dk645\" (UID: \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\") " pod="openstack/keystone-bootstrap-dk645" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.904045 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-combined-ca-bundle\") pod \"keystone-bootstrap-dk645\" (UID: \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\") " pod="openstack/keystone-bootstrap-dk645" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.904515 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-credential-keys\") pod \"keystone-bootstrap-dk645\" (UID: \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\") " pod="openstack/keystone-bootstrap-dk645" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.907841 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-scripts\") pod \"keystone-bootstrap-dk645\" (UID: \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\") " pod="openstack/keystone-bootstrap-dk645" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.908467 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-fernet-keys\") pod \"keystone-bootstrap-dk645\" (UID: \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\") " pod="openstack/keystone-bootstrap-dk645" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.909302 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-config-data\") pod \"keystone-bootstrap-dk645\" (UID: \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\") " pod="openstack/keystone-bootstrap-dk645" Jan 29 11:46:07 crc 
kubenswrapper[4766]: I0129 11:46:07.934873 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7kvk\" (UniqueName: \"kubernetes.io/projected/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-kube-api-access-j7kvk\") pod \"keystone-bootstrap-dk645\" (UID: \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\") " pod="openstack/keystone-bootstrap-dk645" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.989211 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-wc899"] Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.992621 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-bthzf\" (UID: \"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\") " pod="openstack/dnsmasq-dns-bbf5cc879-bthzf" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.992713 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-bthzf\" (UID: \"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\") " pod="openstack/dnsmasq-dns-bbf5cc879-bthzf" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.992757 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-bthzf\" (UID: \"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\") " pod="openstack/dnsmasq-dns-bbf5cc879-bthzf" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.992964 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-bthzf\" (UID: \"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\") " pod="openstack/dnsmasq-dns-bbf5cc879-bthzf" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.992988 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-config\") pod \"dnsmasq-dns-bbf5cc879-bthzf\" (UID: \"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\") " pod="openstack/dnsmasq-dns-bbf5cc879-bthzf" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.993045 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq9zx\" (UniqueName: \"kubernetes.io/projected/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-kube-api-access-cq9zx\") pod \"dnsmasq-dns-bbf5cc879-bthzf\" (UID: \"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\") " pod="openstack/dnsmasq-dns-bbf5cc879-bthzf" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.993789 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-bthzf\" (UID: \"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\") " pod="openstack/dnsmasq-dns-bbf5cc879-bthzf" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.994019 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-bthzf\" (UID: 
\"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\") " pod="openstack/dnsmasq-dns-bbf5cc879-bthzf" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.994567 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-bthzf\" (UID: \"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\") " pod="openstack/dnsmasq-dns-bbf5cc879-bthzf" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.994611 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-config\") pod \"dnsmasq-dns-bbf5cc879-bthzf\" (UID: \"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\") " pod="openstack/dnsmasq-dns-bbf5cc879-bthzf" Jan 29 11:46:07 crc kubenswrapper[4766]: I0129 11:46:07.995221 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-bthzf\" (UID: \"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\") " pod="openstack/dnsmasq-dns-bbf5cc879-bthzf" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.000878 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wc899" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.018808 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-cd22x" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.019131 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.024441 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.039203 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-wc899"] Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.052850 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq9zx\" (UniqueName: \"kubernetes.io/projected/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-kube-api-access-cq9zx\") pod \"dnsmasq-dns-bbf5cc879-bthzf\" (UID: \"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\") " pod="openstack/dnsmasq-dns-bbf5cc879-bthzf" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.100335 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2d775c8-398d-45dd-aea7-2c2bc050e040-scripts\") pod \"cinder-db-sync-wc899\" (UID: \"e2d775c8-398d-45dd-aea7-2c2bc050e040\") " pod="openstack/cinder-db-sync-wc899" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.100456 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgstc\" (UniqueName: \"kubernetes.io/projected/e2d775c8-398d-45dd-aea7-2c2bc050e040-kube-api-access-fgstc\") pod \"cinder-db-sync-wc899\" (UID: \"e2d775c8-398d-45dd-aea7-2c2bc050e040\") " pod="openstack/cinder-db-sync-wc899" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.100494 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e2d775c8-398d-45dd-aea7-2c2bc050e040-db-sync-config-data\") pod \"cinder-db-sync-wc899\" (UID: 
\"e2d775c8-398d-45dd-aea7-2c2bc050e040\") " pod="openstack/cinder-db-sync-wc899" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.100533 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2d775c8-398d-45dd-aea7-2c2bc050e040-etc-machine-id\") pod \"cinder-db-sync-wc899\" (UID: \"e2d775c8-398d-45dd-aea7-2c2bc050e040\") " pod="openstack/cinder-db-sync-wc899" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.100581 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d775c8-398d-45dd-aea7-2c2bc050e040-combined-ca-bundle\") pod \"cinder-db-sync-wc899\" (UID: \"e2d775c8-398d-45dd-aea7-2c2bc050e040\") " pod="openstack/cinder-db-sync-wc899" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.100617 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d775c8-398d-45dd-aea7-2c2bc050e040-config-data\") pod \"cinder-db-sync-wc899\" (UID: \"e2d775c8-398d-45dd-aea7-2c2bc050e040\") " pod="openstack/cinder-db-sync-wc899" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.150851 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dk645" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.157303 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.159124 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.165863 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.166094 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.170357 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-bthzf" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.195163 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.206111 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2d775c8-398d-45dd-aea7-2c2bc050e040-scripts\") pod \"cinder-db-sync-wc899\" (UID: \"e2d775c8-398d-45dd-aea7-2c2bc050e040\") " pod="openstack/cinder-db-sync-wc899" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.206179 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07b49130-ef93-4a8f-8830-ab2539302987-run-httpd\") pod \"ceilometer-0\" (UID: \"07b49130-ef93-4a8f-8830-ab2539302987\") " pod="openstack/ceilometer-0" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.206219 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgstc\" (UniqueName: \"kubernetes.io/projected/e2d775c8-398d-45dd-aea7-2c2bc050e040-kube-api-access-fgstc\") pod \"cinder-db-sync-wc899\" (UID: \"e2d775c8-398d-45dd-aea7-2c2bc050e040\") " pod="openstack/cinder-db-sync-wc899" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.206237 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b49130-ef93-4a8f-8830-ab2539302987-scripts\") pod \"ceilometer-0\" (UID: \"07b49130-ef93-4a8f-8830-ab2539302987\") " pod="openstack/ceilometer-0" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.206255 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e2d775c8-398d-45dd-aea7-2c2bc050e040-db-sync-config-data\") pod \"cinder-db-sync-wc899\" (UID: \"e2d775c8-398d-45dd-aea7-2c2bc050e040\") " pod="openstack/cinder-db-sync-wc899" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.206273 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b49130-ef93-4a8f-8830-ab2539302987-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07b49130-ef93-4a8f-8830-ab2539302987\") " pod="openstack/ceilometer-0" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.206297 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b49130-ef93-4a8f-8830-ab2539302987-config-data\") pod \"ceilometer-0\" (UID: \"07b49130-ef93-4a8f-8830-ab2539302987\") " pod="openstack/ceilometer-0" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.206320 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2d775c8-398d-45dd-aea7-2c2bc050e040-etc-machine-id\") pod \"cinder-db-sync-wc899\" (UID: \"e2d775c8-398d-45dd-aea7-2c2bc050e040\") " pod="openstack/cinder-db-sync-wc899" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.206336 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07b49130-ef93-4a8f-8830-ab2539302987-log-httpd\") pod \"ceilometer-0\" (UID: \"07b49130-ef93-4a8f-8830-ab2539302987\") " pod="openstack/ceilometer-0" 
Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.206373 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d775c8-398d-45dd-aea7-2c2bc050e040-combined-ca-bundle\") pod \"cinder-db-sync-wc899\" (UID: \"e2d775c8-398d-45dd-aea7-2c2bc050e040\") " pod="openstack/cinder-db-sync-wc899" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.206399 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d775c8-398d-45dd-aea7-2c2bc050e040-config-data\") pod \"cinder-db-sync-wc899\" (UID: \"e2d775c8-398d-45dd-aea7-2c2bc050e040\") " pod="openstack/cinder-db-sync-wc899" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.206448 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5wwp\" (UniqueName: \"kubernetes.io/projected/07b49130-ef93-4a8f-8830-ab2539302987-kube-api-access-z5wwp\") pod \"ceilometer-0\" (UID: \"07b49130-ef93-4a8f-8830-ab2539302987\") " pod="openstack/ceilometer-0" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.206472 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07b49130-ef93-4a8f-8830-ab2539302987-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07b49130-ef93-4a8f-8830-ab2539302987\") " pod="openstack/ceilometer-0" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.220863 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2d775c8-398d-45dd-aea7-2c2bc050e040-scripts\") pod \"cinder-db-sync-wc899\" (UID: \"e2d775c8-398d-45dd-aea7-2c2bc050e040\") " pod="openstack/cinder-db-sync-wc899" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.221247 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e2d775c8-398d-45dd-aea7-2c2bc050e040-db-sync-config-data\") pod \"cinder-db-sync-wc899\" (UID: \"e2d775c8-398d-45dd-aea7-2c2bc050e040\") " pod="openstack/cinder-db-sync-wc899" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.221302 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2d775c8-398d-45dd-aea7-2c2bc050e040-etc-machine-id\") pod \"cinder-db-sync-wc899\" (UID: \"e2d775c8-398d-45dd-aea7-2c2bc050e040\") " pod="openstack/cinder-db-sync-wc899" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.229394 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d775c8-398d-45dd-aea7-2c2bc050e040-combined-ca-bundle\") pod \"cinder-db-sync-wc899\" (UID: \"e2d775c8-398d-45dd-aea7-2c2bc050e040\") " pod="openstack/cinder-db-sync-wc899" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.230895 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d775c8-398d-45dd-aea7-2c2bc050e040-config-data\") pod \"cinder-db-sync-wc899\" (UID: \"e2d775c8-398d-45dd-aea7-2c2bc050e040\") " pod="openstack/cinder-db-sync-wc899" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.270670 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-tdq67"] Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.271741 4766 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tdq67" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.273116 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgstc\" (UniqueName: \"kubernetes.io/projected/e2d775c8-398d-45dd-aea7-2c2bc050e040-kube-api-access-fgstc\") pod \"cinder-db-sync-wc899\" (UID: \"e2d775c8-398d-45dd-aea7-2c2bc050e040\") " pod="openstack/cinder-db-sync-wc899" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.274809 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.275092 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.275259 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-8j7c7" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.310287 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzm27\" (UniqueName: \"kubernetes.io/projected/db09eba3-8fd8-4448-8e6c-2819328ac301-kube-api-access-tzm27\") pod \"placement-db-sync-tdq67\" (UID: \"db09eba3-8fd8-4448-8e6c-2819328ac301\") " pod="openstack/placement-db-sync-tdq67" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.310351 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07b49130-ef93-4a8f-8830-ab2539302987-run-httpd\") pod \"ceilometer-0\" (UID: \"07b49130-ef93-4a8f-8830-ab2539302987\") " pod="openstack/ceilometer-0" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.310442 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b49130-ef93-4a8f-8830-ab2539302987-scripts\") pod \"ceilometer-0\" (UID: \"07b49130-ef93-4a8f-8830-ab2539302987\") " pod="openstack/ceilometer-0" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.310474 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db09eba3-8fd8-4448-8e6c-2819328ac301-scripts\") pod \"placement-db-sync-tdq67\" (UID: \"db09eba3-8fd8-4448-8e6c-2819328ac301\") " pod="openstack/placement-db-sync-tdq67" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.310502 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b49130-ef93-4a8f-8830-ab2539302987-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07b49130-ef93-4a8f-8830-ab2539302987\") " pod="openstack/ceilometer-0" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.310526 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db09eba3-8fd8-4448-8e6c-2819328ac301-config-data\") pod \"placement-db-sync-tdq67\" (UID: \"db09eba3-8fd8-4448-8e6c-2819328ac301\") " pod="openstack/placement-db-sync-tdq67" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.310575 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b49130-ef93-4a8f-8830-ab2539302987-config-data\") pod \"ceilometer-0\" (UID: \"07b49130-ef93-4a8f-8830-ab2539302987\") " 
pod="openstack/ceilometer-0" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.310603 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07b49130-ef93-4a8f-8830-ab2539302987-log-httpd\") pod \"ceilometer-0\" (UID: \"07b49130-ef93-4a8f-8830-ab2539302987\") " pod="openstack/ceilometer-0" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.310628 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db09eba3-8fd8-4448-8e6c-2819328ac301-logs\") pod \"placement-db-sync-tdq67\" (UID: \"db09eba3-8fd8-4448-8e6c-2819328ac301\") " pod="openstack/placement-db-sync-tdq67" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.310656 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db09eba3-8fd8-4448-8e6c-2819328ac301-combined-ca-bundle\") pod \"placement-db-sync-tdq67\" (UID: \"db09eba3-8fd8-4448-8e6c-2819328ac301\") " pod="openstack/placement-db-sync-tdq67" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.310740 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5wwp\" (UniqueName: \"kubernetes.io/projected/07b49130-ef93-4a8f-8830-ab2539302987-kube-api-access-z5wwp\") pod \"ceilometer-0\" (UID: \"07b49130-ef93-4a8f-8830-ab2539302987\") " pod="openstack/ceilometer-0" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.310770 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07b49130-ef93-4a8f-8830-ab2539302987-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07b49130-ef93-4a8f-8830-ab2539302987\") " pod="openstack/ceilometer-0" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.314799 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07b49130-ef93-4a8f-8830-ab2539302987-run-httpd\") pod \"ceilometer-0\" (UID: \"07b49130-ef93-4a8f-8830-ab2539302987\") " pod="openstack/ceilometer-0" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.316386 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07b49130-ef93-4a8f-8830-ab2539302987-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07b49130-ef93-4a8f-8830-ab2539302987\") " pod="openstack/ceilometer-0" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.317982 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07b49130-ef93-4a8f-8830-ab2539302987-log-httpd\") pod \"ceilometer-0\" (UID: \"07b49130-ef93-4a8f-8830-ab2539302987\") " pod="openstack/ceilometer-0" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.318517 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-4mnfs"] Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.319711 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-4mnfs" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.331872 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.332100 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.334953 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wc899" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.354282 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5wwp\" (UniqueName: \"kubernetes.io/projected/07b49130-ef93-4a8f-8830-ab2539302987-kube-api-access-z5wwp\") pod \"ceilometer-0\" (UID: \"07b49130-ef93-4a8f-8830-ab2539302987\") " pod="openstack/ceilometer-0" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.356719 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b49130-ef93-4a8f-8830-ab2539302987-config-data\") pod \"ceilometer-0\" (UID: \"07b49130-ef93-4a8f-8830-ab2539302987\") " pod="openstack/ceilometer-0" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.356823 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-dw4jf" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.357394 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b49130-ef93-4a8f-8830-ab2539302987-scripts\") pod \"ceilometer-0\" (UID: \"07b49130-ef93-4a8f-8830-ab2539302987\") " pod="openstack/ceilometer-0" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.358220 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b49130-ef93-4a8f-8830-ab2539302987-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07b49130-ef93-4a8f-8830-ab2539302987\") " pod="openstack/ceilometer-0" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.412403 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db09eba3-8fd8-4448-8e6c-2819328ac301-scripts\") pod \"placement-db-sync-tdq67\" (UID: \"db09eba3-8fd8-4448-8e6c-2819328ac301\") " pod="openstack/placement-db-sync-tdq67" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.412775 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db09eba3-8fd8-4448-8e6c-2819328ac301-config-data\") pod \"placement-db-sync-tdq67\" (UID: \"db09eba3-8fd8-4448-8e6c-2819328ac301\") " pod="openstack/placement-db-sync-tdq67" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.412829 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db09eba3-8fd8-4448-8e6c-2819328ac301-logs\") pod \"placement-db-sync-tdq67\" (UID: \"db09eba3-8fd8-4448-8e6c-2819328ac301\") " pod="openstack/placement-db-sync-tdq67" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.412856 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db09eba3-8fd8-4448-8e6c-2819328ac301-combined-ca-bundle\") pod \"placement-db-sync-tdq67\" (UID: 
\"db09eba3-8fd8-4448-8e6c-2819328ac301\") " pod="openstack/placement-db-sync-tdq67" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.412998 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzm27\" (UniqueName: \"kubernetes.io/projected/db09eba3-8fd8-4448-8e6c-2819328ac301-kube-api-access-tzm27\") pod \"placement-db-sync-tdq67\" (UID: \"db09eba3-8fd8-4448-8e6c-2819328ac301\") " pod="openstack/placement-db-sync-tdq67" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.414949 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db09eba3-8fd8-4448-8e6c-2819328ac301-logs\") pod \"placement-db-sync-tdq67\" (UID: \"db09eba3-8fd8-4448-8e6c-2819328ac301\") " pod="openstack/placement-db-sync-tdq67" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.421143 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db09eba3-8fd8-4448-8e6c-2819328ac301-config-data\") pod \"placement-db-sync-tdq67\" (UID: \"db09eba3-8fd8-4448-8e6c-2819328ac301\") " pod="openstack/placement-db-sync-tdq67" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.425842 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db09eba3-8fd8-4448-8e6c-2819328ac301-scripts\") pod \"placement-db-sync-tdq67\" (UID: \"db09eba3-8fd8-4448-8e6c-2819328ac301\") " pod="openstack/placement-db-sync-tdq67" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.439789 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tdq67"] Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.454059 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db09eba3-8fd8-4448-8e6c-2819328ac301-combined-ca-bundle\") pod \"placement-db-sync-tdq67\" (UID: \"db09eba3-8fd8-4448-8e6c-2819328ac301\") " pod="openstack/placement-db-sync-tdq67" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.472081 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-bthzf"] Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.484032 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzm27\" (UniqueName: \"kubernetes.io/projected/db09eba3-8fd8-4448-8e6c-2819328ac301-kube-api-access-tzm27\") pod \"placement-db-sync-tdq67\" (UID: \"db09eba3-8fd8-4448-8e6c-2819328ac301\") " pod="openstack/placement-db-sync-tdq67" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.486052 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4mnfs"] Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.526757 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16bc3c63-cee9-4f14-82bf-2f912e65cf14-combined-ca-bundle\") pod \"neutron-db-sync-4mnfs\" (UID: \"16bc3c63-cee9-4f14-82bf-2f912e65cf14\") " pod="openstack/neutron-db-sync-4mnfs" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.526888 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/16bc3c63-cee9-4f14-82bf-2f912e65cf14-config\") pod \"neutron-db-sync-4mnfs\" (UID: \"16bc3c63-cee9-4f14-82bf-2f912e65cf14\") " 
pod="openstack/neutron-db-sync-4mnfs" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.527024 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbd2s\" (UniqueName: \"kubernetes.io/projected/16bc3c63-cee9-4f14-82bf-2f912e65cf14-kube-api-access-rbd2s\") pod \"neutron-db-sync-4mnfs\" (UID: \"16bc3c63-cee9-4f14-82bf-2f912e65cf14\") " pod="openstack/neutron-db-sync-4mnfs" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.580731 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-hldhd"] Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.584071 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.590425 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-4bqsv"] Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.591622 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4bqsv" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.591891 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.593397 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.595222 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-6pfwh" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.606615 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-hldhd"] Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.626422 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-tdq67" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.629237 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-hldhd\" (UID: \"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.629306 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/16bc3c63-cee9-4f14-82bf-2f912e65cf14-config\") pod \"neutron-db-sync-4mnfs\" (UID: \"16bc3c63-cee9-4f14-82bf-2f912e65cf14\") " pod="openstack/neutron-db-sync-4mnfs" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.629373 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbd2s\" (UniqueName: \"kubernetes.io/projected/16bc3c63-cee9-4f14-82bf-2f912e65cf14-kube-api-access-rbd2s\") pod \"neutron-db-sync-4mnfs\" (UID: \"16bc3c63-cee9-4f14-82bf-2f912e65cf14\") " pod="openstack/neutron-db-sync-4mnfs" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.629517 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-config\") pod \"dnsmasq-dns-56df8fb6b7-hldhd\" (UID: \"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.629552 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-hldhd\" (UID: \"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.629586 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-hldhd\" (UID: \"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.629617 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc11389e-2508-468a-b9ec-25acfbde9046-combined-ca-bundle\") pod \"barbican-db-sync-4bqsv\" (UID: \"cc11389e-2508-468a-b9ec-25acfbde9046\") " pod="openstack/barbican-db-sync-4bqsv" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.629659 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16bc3c63-cee9-4f14-82bf-2f912e65cf14-combined-ca-bundle\") pod \"neutron-db-sync-4mnfs\" (UID: \"16bc3c63-cee9-4f14-82bf-2f912e65cf14\") " pod="openstack/neutron-db-sync-4mnfs" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.629692 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhq4n\" (UniqueName: \"kubernetes.io/projected/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-kube-api-access-zhq4n\") pod \"dnsmasq-dns-56df8fb6b7-hldhd\" (UID: 
\"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.629715 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc11389e-2508-468a-b9ec-25acfbde9046-db-sync-config-data\") pod \"barbican-db-sync-4bqsv\" (UID: \"cc11389e-2508-468a-b9ec-25acfbde9046\") " pod="openstack/barbican-db-sync-4bqsv" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.629735 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-hldhd\" (UID: \"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.629760 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp8ss\" (UniqueName: \"kubernetes.io/projected/cc11389e-2508-468a-b9ec-25acfbde9046-kube-api-access-tp8ss\") pod \"barbican-db-sync-4bqsv\" (UID: \"cc11389e-2508-468a-b9ec-25acfbde9046\") " pod="openstack/barbican-db-sync-4bqsv" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.633468 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4bqsv"] Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.663204 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16bc3c63-cee9-4f14-82bf-2f912e65cf14-combined-ca-bundle\") pod \"neutron-db-sync-4mnfs\" (UID: \"16bc3c63-cee9-4f14-82bf-2f912e65cf14\") " pod="openstack/neutron-db-sync-4mnfs" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.663236 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/16bc3c63-cee9-4f14-82bf-2f912e65cf14-config\") pod \"neutron-db-sync-4mnfs\" (UID: \"16bc3c63-cee9-4f14-82bf-2f912e65cf14\") " pod="openstack/neutron-db-sync-4mnfs" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.671787 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbd2s\" (UniqueName: \"kubernetes.io/projected/16bc3c63-cee9-4f14-82bf-2f912e65cf14-kube-api-access-rbd2s\") pod \"neutron-db-sync-4mnfs\" (UID: \"16bc3c63-cee9-4f14-82bf-2f912e65cf14\") " pod="openstack/neutron-db-sync-4mnfs" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.695534 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-4mnfs" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.730726 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-hldhd\" (UID: \"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.730831 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-config\") pod \"dnsmasq-dns-56df8fb6b7-hldhd\" (UID: \"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.730855 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-hldhd\" (UID: \"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.730878 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-hldhd\" (UID: \"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.730899 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc11389e-2508-468a-b9ec-25acfbde9046-combined-ca-bundle\") pod \"barbican-db-sync-4bqsv\" (UID: \"cc11389e-2508-468a-b9ec-25acfbde9046\") " pod="openstack/barbican-db-sync-4bqsv" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.730932 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhq4n\" (UniqueName: \"kubernetes.io/projected/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-kube-api-access-zhq4n\") pod \"dnsmasq-dns-56df8fb6b7-hldhd\" (UID: \"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.730949 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc11389e-2508-468a-b9ec-25acfbde9046-db-sync-config-data\") pod \"barbican-db-sync-4bqsv\" (UID: \"cc11389e-2508-468a-b9ec-25acfbde9046\") " pod="openstack/barbican-db-sync-4bqsv" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.730962 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-hldhd\" (UID: \"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.730977 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp8ss\" (UniqueName: \"kubernetes.io/projected/cc11389e-2508-468a-b9ec-25acfbde9046-kube-api-access-tp8ss\") pod \"barbican-db-sync-4bqsv\" (UID: \"cc11389e-2508-468a-b9ec-25acfbde9046\") " pod="openstack/barbican-db-sync-4bqsv" Jan 29 
11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.731992 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-hldhd\" (UID: \"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.733703 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-config\") pod \"dnsmasq-dns-56df8fb6b7-hldhd\" (UID: \"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.734542 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-hldhd\" (UID: \"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.735119 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-hldhd\" (UID: \"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.735543 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-hldhd\" (UID: \"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.736998 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc11389e-2508-468a-b9ec-25acfbde9046-db-sync-config-data\") pod \"barbican-db-sync-4bqsv\" (UID: \"cc11389e-2508-468a-b9ec-25acfbde9046\") " pod="openstack/barbican-db-sync-4bqsv" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.737437 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc11389e-2508-468a-b9ec-25acfbde9046-combined-ca-bundle\") pod \"barbican-db-sync-4bqsv\" (UID: \"cc11389e-2508-468a-b9ec-25acfbde9046\") " pod="openstack/barbican-db-sync-4bqsv" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.755397 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhq4n\" (UniqueName: \"kubernetes.io/projected/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-kube-api-access-zhq4n\") pod \"dnsmasq-dns-56df8fb6b7-hldhd\" (UID: \"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.755652 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp8ss\" (UniqueName: \"kubernetes.io/projected/cc11389e-2508-468a-b9ec-25acfbde9046-kube-api-access-tp8ss\") pod \"barbican-db-sync-4bqsv\" (UID: \"cc11389e-2508-468a-b9ec-25acfbde9046\") " pod="openstack/barbican-db-sync-4bqsv" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.887257 4766 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-default-external-api-0"] Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.888674 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.892046 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.892465 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.892767 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.893600 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-z2vhg" Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.898293 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:46:08 crc kubenswrapper[4766]: I0129 11:46:08.989276 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.000013 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4bqsv" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.036131 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a395b791-8644-48a9-8a21-98242ae82b14-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " pod="openstack/glance-default-external-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.038984 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a395b791-8644-48a9-8a21-98242ae82b14-config-data\") pod \"glance-default-external-api-0\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " pod="openstack/glance-default-external-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.039057 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a395b791-8644-48a9-8a21-98242ae82b14-scripts\") pod \"glance-default-external-api-0\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " pod="openstack/glance-default-external-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.039090 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " pod="openstack/glance-default-external-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.039164 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a395b791-8644-48a9-8a21-98242ae82b14-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " pod="openstack/glance-default-external-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.039273 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p6nk\" (UniqueName: \"kubernetes.io/projected/a395b791-8644-48a9-8a21-98242ae82b14-kube-api-access-8p6nk\") pod \"glance-default-external-api-0\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " pod="openstack/glance-default-external-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.039343 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a395b791-8644-48a9-8a21-98242ae82b14-logs\") pod \"glance-default-external-api-0\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " pod="openstack/glance-default-external-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.039568 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a395b791-8644-48a9-8a21-98242ae82b14-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " pod="openstack/glance-default-external-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.049141 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.051987 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.054974 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.060440 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.075929 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.086274 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-bthzf"] Jan 29 11:46:09 crc kubenswrapper[4766]: W0129 11:46:09.087579 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbffce56_9cb1_4eb4_bdb2_bb78e7c7ca27.slice/crio-ba0684d1804df1ee741ca5461adacf91652878b0e909152100cb84b4ae332f09 WatchSource:0}: Error finding container ba0684d1804df1ee741ca5461adacf91652878b0e909152100cb84b4ae332f09: Status 404 returned error can't find the container with id ba0684d1804df1ee741ca5461adacf91652878b0e909152100cb84b4ae332f09 Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.101089 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-wc899"] Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.141693 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p6nk\" (UniqueName: \"kubernetes.io/projected/a395b791-8644-48a9-8a21-98242ae82b14-kube-api-access-8p6nk\") pod \"glance-default-external-api-0\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " pod="openstack/glance-default-external-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.141888 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a395b791-8644-48a9-8a21-98242ae82b14-logs\") pod \"glance-default-external-api-0\" (UID: 
\"a395b791-8644-48a9-8a21-98242ae82b14\") " pod="openstack/glance-default-external-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.142325 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a395b791-8644-48a9-8a21-98242ae82b14-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " pod="openstack/glance-default-external-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.142440 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a395b791-8644-48a9-8a21-98242ae82b14-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " pod="openstack/glance-default-external-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.142580 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a395b791-8644-48a9-8a21-98242ae82b14-config-data\") pod \"glance-default-external-api-0\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " pod="openstack/glance-default-external-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.142691 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a395b791-8644-48a9-8a21-98242ae82b14-scripts\") pod \"glance-default-external-api-0\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " pod="openstack/glance-default-external-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.142791 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " pod="openstack/glance-default-external-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.142925 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a395b791-8644-48a9-8a21-98242ae82b14-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " pod="openstack/glance-default-external-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.142623 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a395b791-8644-48a9-8a21-98242ae82b14-logs\") pod \"glance-default-external-api-0\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " pod="openstack/glance-default-external-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.143774 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.144033 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a395b791-8644-48a9-8a21-98242ae82b14-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " pod="openstack/glance-default-external-api-0" Jan 29 11:46:09 crc 
kubenswrapper[4766]: I0129 11:46:09.148354 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a395b791-8644-48a9-8a21-98242ae82b14-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " pod="openstack/glance-default-external-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.148508 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a395b791-8644-48a9-8a21-98242ae82b14-scripts\") pod \"glance-default-external-api-0\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " pod="openstack/glance-default-external-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.150405 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a395b791-8644-48a9-8a21-98242ae82b14-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " pod="openstack/glance-default-external-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.177646 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p6nk\" (UniqueName: \"kubernetes.io/projected/a395b791-8644-48a9-8a21-98242ae82b14-kube-api-access-8p6nk\") pod \"glance-default-external-api-0\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " pod="openstack/glance-default-external-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.181938 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a395b791-8644-48a9-8a21-98242ae82b14-config-data\") pod \"glance-default-external-api-0\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " pod="openstack/glance-default-external-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.192579 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " pod="openstack/glance-default-external-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.246239 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dk645"] Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.248025 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afbf1601-e667-43b8-8e24-a573909e1e10-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.248081 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afbf1601-e667-43b8-8e24-a573909e1e10-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.248337 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"afbf1601-e667-43b8-8e24-a573909e1e10\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.248431 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afbf1601-e667-43b8-8e24-a573909e1e10-config-data\") pod \"glance-default-internal-api-0\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.248457 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/afbf1601-e667-43b8-8e24-a573909e1e10-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.248485 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afbf1601-e667-43b8-8e24-a573909e1e10-logs\") pod \"glance-default-internal-api-0\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.248712 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afbf1601-e667-43b8-8e24-a573909e1e10-scripts\") pod \"glance-default-internal-api-0\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.248785 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n4v4\" (UniqueName: \"kubernetes.io/projected/afbf1601-e667-43b8-8e24-a573909e1e10-kube-api-access-8n4v4\") pod \"glance-default-internal-api-0\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: W0129 11:46:09.251079 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd3cee91_47c6_4839_862b_d1c3fff0fd6b.slice/crio-b62df34b8240db66c51f7abd0db0e3c428e51deb12072f1cab0934788f3e31a1 WatchSource:0}: Error finding container b62df34b8240db66c51f7abd0db0e3c428e51deb12072f1cab0934788f3e31a1: Status 404 returned error can't find the container with id b62df34b8240db66c51f7abd0db0e3c428e51deb12072f1cab0934788f3e31a1 Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.279977 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.351210 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afbf1601-e667-43b8-8e24-a573909e1e10-config-data\") pod \"glance-default-internal-api-0\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.351261 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/afbf1601-e667-43b8-8e24-a573909e1e10-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.351317 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afbf1601-e667-43b8-8e24-a573909e1e10-logs\") pod \"glance-default-internal-api-0\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.351448 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afbf1601-e667-43b8-8e24-a573909e1e10-scripts\") pod \"glance-default-internal-api-0\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.351541 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n4v4\" (UniqueName: \"kubernetes.io/projected/afbf1601-e667-43b8-8e24-a573909e1e10-kube-api-access-8n4v4\") pod \"glance-default-internal-api-0\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.351639 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afbf1601-e667-43b8-8e24-a573909e1e10-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.351665 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afbf1601-e667-43b8-8e24-a573909e1e10-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.351708 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.353741 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afbf1601-e667-43b8-8e24-a573909e1e10-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:09 crc 
kubenswrapper[4766]: I0129 11:46:09.353797 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afbf1601-e667-43b8-8e24-a573909e1e10-logs\") pod \"glance-default-internal-api-0\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.353897 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.369247 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/afbf1601-e667-43b8-8e24-a573909e1e10-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.369511 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afbf1601-e667-43b8-8e24-a573909e1e10-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.381362 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afbf1601-e667-43b8-8e24-a573909e1e10-scripts\") pod \"glance-default-internal-api-0\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.389135 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n4v4\" (UniqueName: \"kubernetes.io/projected/afbf1601-e667-43b8-8e24-a573909e1e10-kube-api-access-8n4v4\") pod \"glance-default-internal-api-0\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.390262 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afbf1601-e667-43b8-8e24-a573909e1e10-config-data\") pod \"glance-default-internal-api-0\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.418631 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.488437 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.500910 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tdq67"] Jan 29 11:46:09 crc kubenswrapper[4766]: W0129 11:46:09.508092 4766 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16bc3c63_cee9_4f14_82bf_2f912e65cf14.slice/crio-e65ff2aee0064554a7e724b8206cdcc85fd55a960798bb15f01347ed2692e0fc WatchSource:0}: Error finding container e65ff2aee0064554a7e724b8206cdcc85fd55a960798bb15f01347ed2692e0fc: Status 404 returned error can't find the container with id e65ff2aee0064554a7e724b8206cdcc85fd55a960798bb15f01347ed2692e0fc Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.509390 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4mnfs"] Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.548920 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.556887 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tdq67" event={"ID":"db09eba3-8fd8-4448-8e6c-2819328ac301","Type":"ContainerStarted","Data":"bda017599b729bf24b57b5e457afcc4209b20199d18e9f6e820de84637f5b1cb"} Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.561664 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-bthzf" event={"ID":"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27","Type":"ContainerStarted","Data":"80d06f448e700d149a870018a25ac7b88e08985480ac511974cd2c8fc6952176"} Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.561706 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-bthzf" event={"ID":"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27","Type":"ContainerStarted","Data":"ba0684d1804df1ee741ca5461adacf91652878b0e909152100cb84b4ae332f09"} Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.564705 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dk645" event={"ID":"cd3cee91-47c6-4839-862b-d1c3fff0fd6b","Type":"ContainerStarted","Data":"fe3e8be97badf0ba50b26051af0565eb983bd7430bea4d09ce0b37e4f5910f20"} Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.564752 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dk645" event={"ID":"cd3cee91-47c6-4839-862b-d1c3fff0fd6b","Type":"ContainerStarted","Data":"b62df34b8240db66c51f7abd0db0e3c428e51deb12072f1cab0934788f3e31a1"} Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.567083 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4mnfs" event={"ID":"16bc3c63-cee9-4f14-82bf-2f912e65cf14","Type":"ContainerStarted","Data":"e65ff2aee0064554a7e724b8206cdcc85fd55a960798bb15f01347ed2692e0fc"} Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.568216 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07b49130-ef93-4a8f-8830-ab2539302987","Type":"ContainerStarted","Data":"ee1cb9190491ff520aab9329ae373712edbf3b4772b5cec43c5cf1188f7ed0c5"} Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.569273 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wc899" event={"ID":"e2d775c8-398d-45dd-aea7-2c2bc050e040","Type":"ContainerStarted","Data":"3fe5b611a3c0a15393a1c08ec858871335b27854a53e378458f0176bbfbc3cae"} Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.611991 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-dk645" podStartSLOduration=2.611971736 podStartE2EDuration="2.611971736s" podCreationTimestamp="2026-01-29 11:46:07 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:46:09.608009146 +0000 UTC m=+1506.720402157" watchObservedRunningTime="2026-01-29 11:46:09.611971736 +0000 UTC m=+1506.724364747" Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.705401 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4bqsv"] Jan 29 11:46:09 crc kubenswrapper[4766]: I0129 11:46:09.753965 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-hldhd"] Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.026364 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.202777 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-bthzf" Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.274642 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-dns-swift-storage-0\") pod \"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\" (UID: \"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\") " Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.274769 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq9zx\" (UniqueName: \"kubernetes.io/projected/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-kube-api-access-cq9zx\") pod \"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\" (UID: \"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\") " Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.274849 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-ovsdbserver-sb\") pod \"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\" (UID: \"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\") " Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.274870 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-ovsdbserver-nb\") pod \"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\" (UID: \"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\") " Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.274937 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-config\") pod \"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\" (UID: \"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\") " Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.275064 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-dns-svc\") pod \"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\" (UID: \"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27\") " Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.276455 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:46:10 crc kubenswrapper[4766]: W0129 11:46:10.278435 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafbf1601_e667_43b8_8e24_a573909e1e10.slice/crio-a9ad30e5242f96fc01fc7eacc0ab7c1c9328dc5b96324396a0378678771991f5 WatchSource:0}: 
Error finding container a9ad30e5242f96fc01fc7eacc0ab7c1c9328dc5b96324396a0378678771991f5: Status 404 returned error can't find the container with id a9ad30e5242f96fc01fc7eacc0ab7c1c9328dc5b96324396a0378678771991f5 Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.284258 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-kube-api-access-cq9zx" (OuterVolumeSpecName: "kube-api-access-cq9zx") pod "dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27" (UID: "dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27"). InnerVolumeSpecName "kube-api-access-cq9zx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.308086 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27" (UID: "dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.312501 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-config" (OuterVolumeSpecName: "config") pod "dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27" (UID: "dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.318340 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27" (UID: "dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.342702 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27" (UID: "dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.345799 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27" (UID: "dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.378097 4766 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.378213 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq9zx\" (UniqueName: \"kubernetes.io/projected/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-kube-api-access-cq9zx\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.378234 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.378245 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.378257 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.378297 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.584223 4766 generic.go:334] "Generic (PLEG): container finished" podID="e21d6c49-ac47-47dc-9515-2ff0e5e04f31" containerID="03a3dcb42c888e6846b8f5d199f135ffb445f08c9ab827f2c9921b65405ef94a" exitCode=0 Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.584433 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" event={"ID":"e21d6c49-ac47-47dc-9515-2ff0e5e04f31","Type":"ContainerDied","Data":"03a3dcb42c888e6846b8f5d199f135ffb445f08c9ab827f2c9921b65405ef94a"} Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.584616 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" event={"ID":"e21d6c49-ac47-47dc-9515-2ff0e5e04f31","Type":"ContainerStarted","Data":"ec91be07f9497483e1a1392af1be16acea81aec90c16b0a4a2871daa36c65672"} Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.587017 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"afbf1601-e667-43b8-8e24-a573909e1e10","Type":"ContainerStarted","Data":"a9ad30e5242f96fc01fc7eacc0ab7c1c9328dc5b96324396a0378678771991f5"} Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.597229 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a395b791-8644-48a9-8a21-98242ae82b14","Type":"ContainerStarted","Data":"be29bf8edea0b8cd9378047d1d93d1c5699ec802c3da6f8916259019459af147"} Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.611497 4766 generic.go:334] "Generic (PLEG): container finished" podID="dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27" containerID="80d06f448e700d149a870018a25ac7b88e08985480ac511974cd2c8fc6952176" exitCode=0 Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.611598 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-bbf5cc879-bthzf" event={"ID":"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27","Type":"ContainerDied","Data":"80d06f448e700d149a870018a25ac7b88e08985480ac511974cd2c8fc6952176"} Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.611629 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-bthzf" event={"ID":"dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27","Type":"ContainerDied","Data":"ba0684d1804df1ee741ca5461adacf91652878b0e909152100cb84b4ae332f09"} Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.611648 4766 scope.go:117] "RemoveContainer" containerID="80d06f448e700d149a870018a25ac7b88e08985480ac511974cd2c8fc6952176" Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.611768 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-bthzf" Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.623766 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4mnfs" event={"ID":"16bc3c63-cee9-4f14-82bf-2f912e65cf14","Type":"ContainerStarted","Data":"6178e29f3f97ee21ec4cf7acfdfb1b895e6b1f01bc50ed4550a76d923def4120"} Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.636743 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4bqsv" event={"ID":"cc11389e-2508-468a-b9ec-25acfbde9046","Type":"ContainerStarted","Data":"54da3e2b1fa9c3e0ded6eb32952f427687439fc1af3417381a7b8a1bc7bfe49c"} Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.658205 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-4mnfs" podStartSLOduration=2.658185771 podStartE2EDuration="2.658185771s" podCreationTimestamp="2026-01-29 11:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:46:10.648692818 +0000 UTC m=+1507.761085829" watchObservedRunningTime="2026-01-29 11:46:10.658185771 +0000 UTC m=+1507.770578792" Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.708463 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-bthzf"] Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.719291 4766 scope.go:117] "RemoveContainer" containerID="80d06f448e700d149a870018a25ac7b88e08985480ac511974cd2c8fc6952176" Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.720822 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-bthzf"] Jan 29 11:46:10 crc kubenswrapper[4766]: E0129 11:46:10.721863 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80d06f448e700d149a870018a25ac7b88e08985480ac511974cd2c8fc6952176\": container with ID starting with 80d06f448e700d149a870018a25ac7b88e08985480ac511974cd2c8fc6952176 not found: ID does not exist" containerID="80d06f448e700d149a870018a25ac7b88e08985480ac511974cd2c8fc6952176" Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.721898 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80d06f448e700d149a870018a25ac7b88e08985480ac511974cd2c8fc6952176"} err="failed to get container status \"80d06f448e700d149a870018a25ac7b88e08985480ac511974cd2c8fc6952176\": rpc error: code = NotFound desc = could not find container \"80d06f448e700d149a870018a25ac7b88e08985480ac511974cd2c8fc6952176\": container with ID starting with 
80d06f448e700d149a870018a25ac7b88e08985480ac511974cd2c8fc6952176 not found: ID does not exist" Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.829775 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.879534 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:46:10 crc kubenswrapper[4766]: I0129 11:46:10.937742 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:46:11 crc kubenswrapper[4766]: I0129 11:46:11.242737 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27" path="/var/lib/kubelet/pods/dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27/volumes" Jan 29 11:46:11 crc kubenswrapper[4766]: I0129 11:46:11.652397 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a395b791-8644-48a9-8a21-98242ae82b14","Type":"ContainerStarted","Data":"fc4f04f3d722dbefac17815a0008454d8272c06cc63c1920d1e7e3aacd16bce6"} Jan 29 11:46:11 crc kubenswrapper[4766]: I0129 11:46:11.654981 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"afbf1601-e667-43b8-8e24-a573909e1e10","Type":"ContainerStarted","Data":"d786e0405320809a96af346c35692c438ad7742c08fa3ce17122a0f104fafaa5"} Jan 29 11:46:11 crc kubenswrapper[4766]: I0129 11:46:11.666094 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" event={"ID":"e21d6c49-ac47-47dc-9515-2ff0e5e04f31","Type":"ContainerStarted","Data":"cbecd2dd6f5c9fffbc21fc983d684baddd2a90e8023527a627160d7c5f7ee6e2"} Jan 29 11:46:11 crc kubenswrapper[4766]: I0129 11:46:11.666155 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" Jan 29 11:46:11 crc kubenswrapper[4766]: I0129 11:46:11.686675 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" podStartSLOduration=3.686650384 podStartE2EDuration="3.686650384s" podCreationTimestamp="2026-01-29 11:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:46:11.686470489 +0000 UTC m=+1508.798863500" watchObservedRunningTime="2026-01-29 11:46:11.686650384 +0000 UTC m=+1508.799043385" Jan 29 11:46:12 crc kubenswrapper[4766]: I0129 11:46:12.691578 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"afbf1601-e667-43b8-8e24-a573909e1e10","Type":"ContainerStarted","Data":"e338a8fb60dfe5f32593a50ae289d0ea1611a14385380bb0080e86f210f7ecab"} Jan 29 11:46:12 crc kubenswrapper[4766]: I0129 11:46:12.691682 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="afbf1601-e667-43b8-8e24-a573909e1e10" containerName="glance-log" containerID="cri-o://d786e0405320809a96af346c35692c438ad7742c08fa3ce17122a0f104fafaa5" gracePeriod=30 Jan 29 11:46:12 crc kubenswrapper[4766]: I0129 11:46:12.691705 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="afbf1601-e667-43b8-8e24-a573909e1e10" containerName="glance-httpd" containerID="cri-o://e338a8fb60dfe5f32593a50ae289d0ea1611a14385380bb0080e86f210f7ecab" gracePeriod=30 
Jan 29 11:46:12 crc kubenswrapper[4766]: I0129 11:46:12.693974 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a395b791-8644-48a9-8a21-98242ae82b14","Type":"ContainerStarted","Data":"461ce70085db96b14a15b462a3d528cfd68d461fe66c27a696d2569531d25c4c"} Jan 29 11:46:12 crc kubenswrapper[4766]: I0129 11:46:12.693982 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a395b791-8644-48a9-8a21-98242ae82b14" containerName="glance-log" containerID="cri-o://fc4f04f3d722dbefac17815a0008454d8272c06cc63c1920d1e7e3aacd16bce6" gracePeriod=30 Jan 29 11:46:12 crc kubenswrapper[4766]: I0129 11:46:12.694018 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a395b791-8644-48a9-8a21-98242ae82b14" containerName="glance-httpd" containerID="cri-o://461ce70085db96b14a15b462a3d528cfd68d461fe66c27a696d2569531d25c4c" gracePeriod=30 Jan 29 11:46:12 crc kubenswrapper[4766]: I0129 11:46:12.721157 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.721130865 podStartE2EDuration="5.721130865s" podCreationTimestamp="2026-01-29 11:46:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:46:12.708766132 +0000 UTC m=+1509.821159143" watchObservedRunningTime="2026-01-29 11:46:12.721130865 +0000 UTC m=+1509.833523876" Jan 29 11:46:12 crc kubenswrapper[4766]: I0129 11:46:12.747497 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.747478996 podStartE2EDuration="5.747478996s" podCreationTimestamp="2026-01-29 11:46:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:46:12.73719625 +0000 UTC m=+1509.849589261" watchObservedRunningTime="2026-01-29 11:46:12.747478996 +0000 UTC m=+1509.859872007" Jan 29 11:46:13 crc kubenswrapper[4766]: I0129 11:46:13.708345 4766 generic.go:334] "Generic (PLEG): container finished" podID="afbf1601-e667-43b8-8e24-a573909e1e10" containerID="e338a8fb60dfe5f32593a50ae289d0ea1611a14385380bb0080e86f210f7ecab" exitCode=0 Jan 29 11:46:13 crc kubenswrapper[4766]: I0129 11:46:13.708666 4766 generic.go:334] "Generic (PLEG): container finished" podID="afbf1601-e667-43b8-8e24-a573909e1e10" containerID="d786e0405320809a96af346c35692c438ad7742c08fa3ce17122a0f104fafaa5" exitCode=143 Jan 29 11:46:13 crc kubenswrapper[4766]: I0129 11:46:13.708450 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"afbf1601-e667-43b8-8e24-a573909e1e10","Type":"ContainerDied","Data":"e338a8fb60dfe5f32593a50ae289d0ea1611a14385380bb0080e86f210f7ecab"} Jan 29 11:46:13 crc kubenswrapper[4766]: I0129 11:46:13.708734 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"afbf1601-e667-43b8-8e24-a573909e1e10","Type":"ContainerDied","Data":"d786e0405320809a96af346c35692c438ad7742c08fa3ce17122a0f104fafaa5"} Jan 29 11:46:13 crc kubenswrapper[4766]: I0129 11:46:13.711638 4766 generic.go:334] "Generic (PLEG): container finished" podID="a395b791-8644-48a9-8a21-98242ae82b14" 
containerID="461ce70085db96b14a15b462a3d528cfd68d461fe66c27a696d2569531d25c4c" exitCode=0 Jan 29 11:46:13 crc kubenswrapper[4766]: I0129 11:46:13.711657 4766 generic.go:334] "Generic (PLEG): container finished" podID="a395b791-8644-48a9-8a21-98242ae82b14" containerID="fc4f04f3d722dbefac17815a0008454d8272c06cc63c1920d1e7e3aacd16bce6" exitCode=143 Jan 29 11:46:13 crc kubenswrapper[4766]: I0129 11:46:13.711736 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a395b791-8644-48a9-8a21-98242ae82b14","Type":"ContainerDied","Data":"461ce70085db96b14a15b462a3d528cfd68d461fe66c27a696d2569531d25c4c"} Jan 29 11:46:13 crc kubenswrapper[4766]: I0129 11:46:13.711783 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a395b791-8644-48a9-8a21-98242ae82b14","Type":"ContainerDied","Data":"fc4f04f3d722dbefac17815a0008454d8272c06cc63c1920d1e7e3aacd16bce6"} Jan 29 11:46:15 crc kubenswrapper[4766]: I0129 11:46:15.750682 4766 generic.go:334] "Generic (PLEG): container finished" podID="cd3cee91-47c6-4839-862b-d1c3fff0fd6b" containerID="fe3e8be97badf0ba50b26051af0565eb983bd7430bea4d09ce0b37e4f5910f20" exitCode=0 Jan 29 11:46:15 crc kubenswrapper[4766]: I0129 11:46:15.750937 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dk645" event={"ID":"cd3cee91-47c6-4839-862b-d1c3fff0fd6b","Type":"ContainerDied","Data":"fe3e8be97badf0ba50b26051af0565eb983bd7430bea4d09ce0b37e4f5910f20"} Jan 29 11:46:18 crc kubenswrapper[4766]: I0129 11:46:18.991983 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" Jan 29 11:46:19 crc kubenswrapper[4766]: I0129 11:46:19.049037 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-h59dv"] Jan 29 11:46:19 crc kubenswrapper[4766]: I0129 11:46:19.049512 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" podUID="da092d08-0c97-45e3-8d8a-162c6a00d827" containerName="dnsmasq-dns" containerID="cri-o://b701ef44e347feb43c14cfc6e87ee771ec0e9ba2936937403c6f2c8f306e1c2c" gracePeriod=10 Jan 29 11:46:19 crc kubenswrapper[4766]: I0129 11:46:19.787191 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"afbf1601-e667-43b8-8e24-a573909e1e10","Type":"ContainerDied","Data":"a9ad30e5242f96fc01fc7eacc0ab7c1c9328dc5b96324396a0378678771991f5"} Jan 29 11:46:19 crc kubenswrapper[4766]: I0129 11:46:19.787232 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9ad30e5242f96fc01fc7eacc0ab7c1c9328dc5b96324396a0378678771991f5" Jan 29 11:46:19 crc kubenswrapper[4766]: I0129 11:46:19.789261 4766 generic.go:334] "Generic (PLEG): container finished" podID="da092d08-0c97-45e3-8d8a-162c6a00d827" containerID="b701ef44e347feb43c14cfc6e87ee771ec0e9ba2936937403c6f2c8f306e1c2c" exitCode=0 Jan 29 11:46:19 crc kubenswrapper[4766]: I0129 11:46:19.789298 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" event={"ID":"da092d08-0c97-45e3-8d8a-162c6a00d827","Type":"ContainerDied","Data":"b701ef44e347feb43c14cfc6e87ee771ec0e9ba2936937403c6f2c8f306e1c2c"} Jan 29 11:46:19 crc kubenswrapper[4766]: I0129 11:46:19.838161 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 11:46:19 crc kubenswrapper[4766]: I0129 11:46:19.991759 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"afbf1601-e667-43b8-8e24-a573909e1e10\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " Jan 29 11:46:19 crc kubenswrapper[4766]: I0129 11:46:19.992174 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n4v4\" (UniqueName: \"kubernetes.io/projected/afbf1601-e667-43b8-8e24-a573909e1e10-kube-api-access-8n4v4\") pod \"afbf1601-e667-43b8-8e24-a573909e1e10\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " Jan 29 11:46:19 crc kubenswrapper[4766]: I0129 11:46:19.992233 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afbf1601-e667-43b8-8e24-a573909e1e10-combined-ca-bundle\") pod \"afbf1601-e667-43b8-8e24-a573909e1e10\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " Jan 29 11:46:19 crc kubenswrapper[4766]: I0129 11:46:19.992258 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afbf1601-e667-43b8-8e24-a573909e1e10-logs\") pod \"afbf1601-e667-43b8-8e24-a573909e1e10\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " Jan 29 11:46:19 crc kubenswrapper[4766]: I0129 11:46:19.992281 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afbf1601-e667-43b8-8e24-a573909e1e10-config-data\") pod \"afbf1601-e667-43b8-8e24-a573909e1e10\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " Jan 29 11:46:19 crc kubenswrapper[4766]: I0129 11:46:19.992350 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afbf1601-e667-43b8-8e24-a573909e1e10-httpd-run\") pod \"afbf1601-e667-43b8-8e24-a573909e1e10\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " Jan 29 11:46:19 crc kubenswrapper[4766]: I0129 11:46:19.992431 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afbf1601-e667-43b8-8e24-a573909e1e10-scripts\") pod \"afbf1601-e667-43b8-8e24-a573909e1e10\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " Jan 29 11:46:19 crc kubenswrapper[4766]: I0129 11:46:19.992501 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/afbf1601-e667-43b8-8e24-a573909e1e10-internal-tls-certs\") pod \"afbf1601-e667-43b8-8e24-a573909e1e10\" (UID: \"afbf1601-e667-43b8-8e24-a573909e1e10\") " Jan 29 11:46:19 crc kubenswrapper[4766]: I0129 11:46:19.993022 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afbf1601-e667-43b8-8e24-a573909e1e10-logs" (OuterVolumeSpecName: "logs") pod "afbf1601-e667-43b8-8e24-a573909e1e10" (UID: "afbf1601-e667-43b8-8e24-a573909e1e10"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:46:19 crc kubenswrapper[4766]: I0129 11:46:19.993240 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afbf1601-e667-43b8-8e24-a573909e1e10-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "afbf1601-e667-43b8-8e24-a573909e1e10" (UID: "afbf1601-e667-43b8-8e24-a573909e1e10"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:46:19 crc kubenswrapper[4766]: I0129 11:46:19.993263 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afbf1601-e667-43b8-8e24-a573909e1e10-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:19 crc kubenswrapper[4766]: I0129 11:46:19.997478 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "afbf1601-e667-43b8-8e24-a573909e1e10" (UID: "afbf1601-e667-43b8-8e24-a573909e1e10"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 11:46:19 crc kubenswrapper[4766]: I0129 11:46:19.997759 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afbf1601-e667-43b8-8e24-a573909e1e10-scripts" (OuterVolumeSpecName: "scripts") pod "afbf1601-e667-43b8-8e24-a573909e1e10" (UID: "afbf1601-e667-43b8-8e24-a573909e1e10"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:46:20 crc kubenswrapper[4766]: I0129 11:46:20.001692 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afbf1601-e667-43b8-8e24-a573909e1e10-kube-api-access-8n4v4" (OuterVolumeSpecName: "kube-api-access-8n4v4") pod "afbf1601-e667-43b8-8e24-a573909e1e10" (UID: "afbf1601-e667-43b8-8e24-a573909e1e10"). InnerVolumeSpecName "kube-api-access-8n4v4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:46:20 crc kubenswrapper[4766]: I0129 11:46:20.018771 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afbf1601-e667-43b8-8e24-a573909e1e10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afbf1601-e667-43b8-8e24-a573909e1e10" (UID: "afbf1601-e667-43b8-8e24-a573909e1e10"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:46:20 crc kubenswrapper[4766]: I0129 11:46:20.041846 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afbf1601-e667-43b8-8e24-a573909e1e10-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "afbf1601-e667-43b8-8e24-a573909e1e10" (UID: "afbf1601-e667-43b8-8e24-a573909e1e10"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:46:20 crc kubenswrapper[4766]: I0129 11:46:20.046297 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afbf1601-e667-43b8-8e24-a573909e1e10-config-data" (OuterVolumeSpecName: "config-data") pod "afbf1601-e667-43b8-8e24-a573909e1e10" (UID: "afbf1601-e667-43b8-8e24-a573909e1e10"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:46:20 crc kubenswrapper[4766]: I0129 11:46:20.095669 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 29 11:46:20 crc kubenswrapper[4766]: I0129 11:46:20.095753 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n4v4\" (UniqueName: \"kubernetes.io/projected/afbf1601-e667-43b8-8e24-a573909e1e10-kube-api-access-8n4v4\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:20 crc kubenswrapper[4766]: I0129 11:46:20.095768 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afbf1601-e667-43b8-8e24-a573909e1e10-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:20 crc kubenswrapper[4766]: I0129 11:46:20.095778 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afbf1601-e667-43b8-8e24-a573909e1e10-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:20 crc kubenswrapper[4766]: I0129 11:46:20.095787 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afbf1601-e667-43b8-8e24-a573909e1e10-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:20 crc kubenswrapper[4766]: I0129 11:46:20.095794 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afbf1601-e667-43b8-8e24-a573909e1e10-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:20 crc kubenswrapper[4766]: I0129 11:46:20.095802 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/afbf1601-e667-43b8-8e24-a573909e1e10-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:20 crc kubenswrapper[4766]: I0129 11:46:20.112507 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 29 11:46:20 crc kubenswrapper[4766]: I0129 11:46:20.196940 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:20 crc kubenswrapper[4766]: I0129 11:46:20.797696 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 11:46:20 crc kubenswrapper[4766]: I0129 11:46:20.833686 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:46:20 crc kubenswrapper[4766]: I0129 11:46:20.851706 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:46:20 crc kubenswrapper[4766]: I0129 11:46:20.882118 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:46:20 crc kubenswrapper[4766]: E0129 11:46:20.882500 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbf1601-e667-43b8-8e24-a573909e1e10" containerName="glance-log" Jan 29 11:46:20 crc kubenswrapper[4766]: I0129 11:46:20.882513 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbf1601-e667-43b8-8e24-a573909e1e10" containerName="glance-log" Jan 29 11:46:20 crc kubenswrapper[4766]: E0129 11:46:20.882528 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbf1601-e667-43b8-8e24-a573909e1e10" containerName="glance-httpd" Jan 29 11:46:20 crc kubenswrapper[4766]: I0129 11:46:20.882534 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbf1601-e667-43b8-8e24-a573909e1e10" containerName="glance-httpd" Jan 29 11:46:20 crc kubenswrapper[4766]: E0129 11:46:20.882564 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27" containerName="init" Jan 29 11:46:20 crc kubenswrapper[4766]: I0129 11:46:20.882571 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27" containerName="init" Jan 29 11:46:20 crc kubenswrapper[4766]: I0129 11:46:20.882828 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbf1601-e667-43b8-8e24-a573909e1e10" containerName="glance-httpd" Jan 29 11:46:20 crc kubenswrapper[4766]: I0129 11:46:20.882845 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbf1601-e667-43b8-8e24-a573909e1e10" containerName="glance-log" Jan 29 11:46:20 crc kubenswrapper[4766]: I0129 11:46:20.882863 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbffce56-9cb1-4eb4-bdb2-bb78e7c7ca27" containerName="init" Jan 29 11:46:20 crc kubenswrapper[4766]: I0129 11:46:20.883730 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:46:20 crc kubenswrapper[4766]: I0129 11:46:20.883806 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 11:46:20 crc kubenswrapper[4766]: I0129 11:46:20.920269 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 11:46:20 crc kubenswrapper[4766]: I0129 11:46:20.920633 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.030857 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34b4e558-a02f-4604-91ce-b99c34e061dd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.031036 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.031104 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34b4e558-a02f-4604-91ce-b99c34e061dd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.031131 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34b4e558-a02f-4604-91ce-b99c34e061dd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.031181 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34b4e558-a02f-4604-91ce-b99c34e061dd-logs\") pod \"glance-default-internal-api-0\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.031263 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n89wm\" (UniqueName: \"kubernetes.io/projected/34b4e558-a02f-4604-91ce-b99c34e061dd-kube-api-access-n89wm\") pod \"glance-default-internal-api-0\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.031437 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34b4e558-a02f-4604-91ce-b99c34e061dd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.031923 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b4e558-a02f-4604-91ce-b99c34e061dd-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.133450 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34b4e558-a02f-4604-91ce-b99c34e061dd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.133504 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b4e558-a02f-4604-91ce-b99c34e061dd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.133535 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34b4e558-a02f-4604-91ce-b99c34e061dd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.133601 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.133632 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34b4e558-a02f-4604-91ce-b99c34e061dd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.133808 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.136759 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34b4e558-a02f-4604-91ce-b99c34e061dd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.137142 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34b4e558-a02f-4604-91ce-b99c34e061dd-logs\") pod \"glance-default-internal-api-0\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.137309 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n89wm\" (UniqueName: \"kubernetes.io/projected/34b4e558-a02f-4604-91ce-b99c34e061dd-kube-api-access-n89wm\") pod \"glance-default-internal-api-0\" (UID: 
\"34b4e558-a02f-4604-91ce-b99c34e061dd\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.137518 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34b4e558-a02f-4604-91ce-b99c34e061dd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.137832 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34b4e558-a02f-4604-91ce-b99c34e061dd-logs\") pod \"glance-default-internal-api-0\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.142088 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b4e558-a02f-4604-91ce-b99c34e061dd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.142168 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34b4e558-a02f-4604-91ce-b99c34e061dd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.145322 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34b4e558-a02f-4604-91ce-b99c34e061dd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.156158 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34b4e558-a02f-4604-91ce-b99c34e061dd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.176267 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n89wm\" (UniqueName: \"kubernetes.io/projected/34b4e558-a02f-4604-91ce-b99c34e061dd-kube-api-access-n89wm\") pod \"glance-default-internal-api-0\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.176658 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.248881 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.249836 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afbf1601-e667-43b8-8e24-a573909e1e10" path="/var/lib/kubelet/pods/afbf1601-e667-43b8-8e24-a573909e1e10/volumes" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.598415 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dk645" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.758118 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-combined-ca-bundle\") pod \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\" (UID: \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\") " Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.758251 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7kvk\" (UniqueName: \"kubernetes.io/projected/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-kube-api-access-j7kvk\") pod \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\" (UID: \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\") " Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.758306 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-credential-keys\") pod \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\" (UID: \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\") " Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.758339 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-config-data\") pod \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\" (UID: \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\") " Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.758357 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-fernet-keys\") pod \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\" (UID: \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\") " Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.758387 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-scripts\") pod \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\" (UID: \"cd3cee91-47c6-4839-862b-d1c3fff0fd6b\") " Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.764920 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "cd3cee91-47c6-4839-862b-d1c3fff0fd6b" (UID: "cd3cee91-47c6-4839-862b-d1c3fff0fd6b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.766632 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-scripts" (OuterVolumeSpecName: "scripts") pod "cd3cee91-47c6-4839-862b-d1c3fff0fd6b" (UID: "cd3cee91-47c6-4839-862b-d1c3fff0fd6b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.766647 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "cd3cee91-47c6-4839-862b-d1c3fff0fd6b" (UID: "cd3cee91-47c6-4839-862b-d1c3fff0fd6b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.771690 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-kube-api-access-j7kvk" (OuterVolumeSpecName: "kube-api-access-j7kvk") pod "cd3cee91-47c6-4839-862b-d1c3fff0fd6b" (UID: "cd3cee91-47c6-4839-862b-d1c3fff0fd6b"). InnerVolumeSpecName "kube-api-access-j7kvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.786515 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-config-data" (OuterVolumeSpecName: "config-data") pod "cd3cee91-47c6-4839-862b-d1c3fff0fd6b" (UID: "cd3cee91-47c6-4839-862b-d1c3fff0fd6b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.791150 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd3cee91-47c6-4839-862b-d1c3fff0fd6b" (UID: "cd3cee91-47c6-4839-862b-d1c3fff0fd6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.807824 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dk645" event={"ID":"cd3cee91-47c6-4839-862b-d1c3fff0fd6b","Type":"ContainerDied","Data":"b62df34b8240db66c51f7abd0db0e3c428e51deb12072f1cab0934788f3e31a1"} Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.807871 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b62df34b8240db66c51f7abd0db0e3c428e51deb12072f1cab0934788f3e31a1" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.807956 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dk645" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.860815 4766 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.860850 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.860859 4766 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.860868 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.860877 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:21 crc kubenswrapper[4766]: I0129 11:46:21.860885 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7kvk\" (UniqueName: \"kubernetes.io/projected/cd3cee91-47c6-4839-862b-d1c3fff0fd6b-kube-api-access-j7kvk\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.211268 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.369009 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a395b791-8644-48a9-8a21-98242ae82b14-httpd-run\") pod \"a395b791-8644-48a9-8a21-98242ae82b14\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.369315 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"a395b791-8644-48a9-8a21-98242ae82b14\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.369379 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a395b791-8644-48a9-8a21-98242ae82b14-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a395b791-8644-48a9-8a21-98242ae82b14" (UID: "a395b791-8644-48a9-8a21-98242ae82b14"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.369430 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a395b791-8644-48a9-8a21-98242ae82b14-combined-ca-bundle\") pod \"a395b791-8644-48a9-8a21-98242ae82b14\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.369472 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a395b791-8644-48a9-8a21-98242ae82b14-scripts\") pod \"a395b791-8644-48a9-8a21-98242ae82b14\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.369488 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a395b791-8644-48a9-8a21-98242ae82b14-logs\") pod \"a395b791-8644-48a9-8a21-98242ae82b14\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.369514 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a395b791-8644-48a9-8a21-98242ae82b14-config-data\") pod \"a395b791-8644-48a9-8a21-98242ae82b14\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.369569 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p6nk\" (UniqueName: \"kubernetes.io/projected/a395b791-8644-48a9-8a21-98242ae82b14-kube-api-access-8p6nk\") pod \"a395b791-8644-48a9-8a21-98242ae82b14\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.369636 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a395b791-8644-48a9-8a21-98242ae82b14-public-tls-certs\") pod \"a395b791-8644-48a9-8a21-98242ae82b14\" (UID: \"a395b791-8644-48a9-8a21-98242ae82b14\") " Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.369971 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a395b791-8644-48a9-8a21-98242ae82b14-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.370196 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a395b791-8644-48a9-8a21-98242ae82b14-logs" (OuterVolumeSpecName: "logs") pod "a395b791-8644-48a9-8a21-98242ae82b14" (UID: "a395b791-8644-48a9-8a21-98242ae82b14"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.373361 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a395b791-8644-48a9-8a21-98242ae82b14-scripts" (OuterVolumeSpecName: "scripts") pod "a395b791-8644-48a9-8a21-98242ae82b14" (UID: "a395b791-8644-48a9-8a21-98242ae82b14"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.378557 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a395b791-8644-48a9-8a21-98242ae82b14-kube-api-access-8p6nk" (OuterVolumeSpecName: "kube-api-access-8p6nk") pod "a395b791-8644-48a9-8a21-98242ae82b14" (UID: "a395b791-8644-48a9-8a21-98242ae82b14"). InnerVolumeSpecName "kube-api-access-8p6nk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.392597 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "a395b791-8644-48a9-8a21-98242ae82b14" (UID: "a395b791-8644-48a9-8a21-98242ae82b14"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.407734 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a395b791-8644-48a9-8a21-98242ae82b14-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a395b791-8644-48a9-8a21-98242ae82b14" (UID: "a395b791-8644-48a9-8a21-98242ae82b14"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.431799 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a395b791-8644-48a9-8a21-98242ae82b14-config-data" (OuterVolumeSpecName: "config-data") pod "a395b791-8644-48a9-8a21-98242ae82b14" (UID: "a395b791-8644-48a9-8a21-98242ae82b14"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.452756 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a395b791-8644-48a9-8a21-98242ae82b14-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a395b791-8644-48a9-8a21-98242ae82b14" (UID: "a395b791-8644-48a9-8a21-98242ae82b14"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.471919 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p6nk\" (UniqueName: \"kubernetes.io/projected/a395b791-8644-48a9-8a21-98242ae82b14-kube-api-access-8p6nk\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.471958 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a395b791-8644-48a9-8a21-98242ae82b14-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.471989 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.472000 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a395b791-8644-48a9-8a21-98242ae82b14-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.472011 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a395b791-8644-48a9-8a21-98242ae82b14-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.472021 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a395b791-8644-48a9-8a21-98242ae82b14-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.472031 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a395b791-8644-48a9-8a21-98242ae82b14-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.490525 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.573962 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.776287 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-dk645"] Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.785540 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-dk645"] Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.825692 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a395b791-8644-48a9-8a21-98242ae82b14","Type":"ContainerDied","Data":"be29bf8edea0b8cd9378047d1d93d1c5699ec802c3da6f8916259019459af147"} Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.826003 4766 scope.go:117] "RemoveContainer" containerID="461ce70085db96b14a15b462a3d528cfd68d461fe66c27a696d2569531d25c4c" Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.826067 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.880546 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-czwdh"] Jan 29 11:46:22 crc kubenswrapper[4766]: E0129 11:46:22.880852 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a395b791-8644-48a9-8a21-98242ae82b14" containerName="glance-log" Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.880864 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a395b791-8644-48a9-8a21-98242ae82b14" containerName="glance-log" Jan 29 11:46:22 crc kubenswrapper[4766]: E0129 11:46:22.880878 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd3cee91-47c6-4839-862b-d1c3fff0fd6b" containerName="keystone-bootstrap" Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.880886 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd3cee91-47c6-4839-862b-d1c3fff0fd6b" containerName="keystone-bootstrap" Jan 29 11:46:22 crc kubenswrapper[4766]: E0129 11:46:22.880920 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a395b791-8644-48a9-8a21-98242ae82b14" containerName="glance-httpd" Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.880926 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a395b791-8644-48a9-8a21-98242ae82b14" containerName="glance-httpd" Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.881092 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="a395b791-8644-48a9-8a21-98242ae82b14" containerName="glance-log" Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.881103 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="a395b791-8644-48a9-8a21-98242ae82b14" containerName="glance-httpd" Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.881114 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd3cee91-47c6-4839-862b-d1c3fff0fd6b" containerName="keystone-bootstrap" Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.881761 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-czwdh"
Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.886610 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.886641 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.887227 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.887417 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-68trd"
Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.887579 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.895959 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.915711 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.921087 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-czwdh"]
Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.935609 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.937235 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.941942 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.942638 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.963637 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.980867 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-combined-ca-bundle\") pod \"keystone-bootstrap-czwdh\" (UID: \"8e8be85b-b686-4a64-ab6a-42122b1a995c\") " pod="openstack/keystone-bootstrap-czwdh"
Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.980954 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-credential-keys\") pod \"keystone-bootstrap-czwdh\" (UID: \"8e8be85b-b686-4a64-ab6a-42122b1a995c\") " pod="openstack/keystone-bootstrap-czwdh"
Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.980998 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-config-data\") pod \"keystone-bootstrap-czwdh\" (UID: \"8e8be85b-b686-4a64-ab6a-42122b1a995c\") " pod="openstack/keystone-bootstrap-czwdh"
Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.981139 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-scripts\") pod \"keystone-bootstrap-czwdh\" (UID: \"8e8be85b-b686-4a64-ab6a-42122b1a995c\") " pod="openstack/keystone-bootstrap-czwdh"
Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.981228 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l92qs\" (UniqueName: \"kubernetes.io/projected/8e8be85b-b686-4a64-ab6a-42122b1a995c-kube-api-access-l92qs\") pod \"keystone-bootstrap-czwdh\" (UID: \"8e8be85b-b686-4a64-ab6a-42122b1a995c\") " pod="openstack/keystone-bootstrap-czwdh"
Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.981474 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-fernet-keys\") pod \"keystone-bootstrap-czwdh\" (UID: \"8e8be85b-b686-4a64-ab6a-42122b1a995c\") " pod="openstack/keystone-bootstrap-czwdh"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.082944 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-fernet-keys\") pod \"keystone-bootstrap-czwdh\" (UID: \"8e8be85b-b686-4a64-ab6a-42122b1a995c\") " pod="openstack/keystone-bootstrap-czwdh"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.083063 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.083092 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ffbf34-3350-41d4-ae62-94700d3e40bc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.083130 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-combined-ca-bundle\") pod \"keystone-bootstrap-czwdh\" (UID: \"8e8be85b-b686-4a64-ab6a-42122b1a995c\") " pod="openstack/keystone-bootstrap-czwdh"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.083170 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-credential-keys\") pod \"keystone-bootstrap-czwdh\" (UID: \"8e8be85b-b686-4a64-ab6a-42122b1a995c\") " pod="openstack/keystone-bootstrap-czwdh"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.083225 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-config-data\") pod \"keystone-bootstrap-czwdh\" (UID: \"8e8be85b-b686-4a64-ab6a-42122b1a995c\") " pod="openstack/keystone-bootstrap-czwdh"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.083253 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-scripts\") pod \"keystone-bootstrap-czwdh\" (UID: \"8e8be85b-b686-4a64-ab6a-42122b1a995c\") " pod="openstack/keystone-bootstrap-czwdh"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.083288 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2grnt\" (UniqueName: \"kubernetes.io/projected/d5ffbf34-3350-41d4-ae62-94700d3e40bc-kube-api-access-2grnt\") pod \"glance-default-external-api-0\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.083311 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l92qs\" (UniqueName: \"kubernetes.io/projected/8e8be85b-b686-4a64-ab6a-42122b1a995c-kube-api-access-l92qs\") pod \"keystone-bootstrap-czwdh\" (UID: \"8e8be85b-b686-4a64-ab6a-42122b1a995c\") " pod="openstack/keystone-bootstrap-czwdh"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.083332 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5ffbf34-3350-41d4-ae62-94700d3e40bc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.083373 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5ffbf34-3350-41d4-ae62-94700d3e40bc-logs\") pod \"glance-default-external-api-0\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.083399 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5ffbf34-3350-41d4-ae62-94700d3e40bc-scripts\") pod \"glance-default-external-api-0\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.083496 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5ffbf34-3350-41d4-ae62-94700d3e40bc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.083524 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ffbf34-3350-41d4-ae62-94700d3e40bc-config-data\") pod \"glance-default-external-api-0\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.088487 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-fernet-keys\") pod \"keystone-bootstrap-czwdh\" (UID: \"8e8be85b-b686-4a64-ab6a-42122b1a995c\") " pod="openstack/keystone-bootstrap-czwdh"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.090477 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-credential-keys\") pod \"keystone-bootstrap-czwdh\" (UID: \"8e8be85b-b686-4a64-ab6a-42122b1a995c\") " pod="openstack/keystone-bootstrap-czwdh"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.090639 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-combined-ca-bundle\") pod \"keystone-bootstrap-czwdh\" (UID: \"8e8be85b-b686-4a64-ab6a-42122b1a995c\") " pod="openstack/keystone-bootstrap-czwdh"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.090956 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-config-data\") pod \"keystone-bootstrap-czwdh\" (UID: \"8e8be85b-b686-4a64-ab6a-42122b1a995c\") " pod="openstack/keystone-bootstrap-czwdh"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.091796 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-scripts\") pod \"keystone-bootstrap-czwdh\" (UID: \"8e8be85b-b686-4a64-ab6a-42122b1a995c\") " pod="openstack/keystone-bootstrap-czwdh"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.102723 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l92qs\" (UniqueName: \"kubernetes.io/projected/8e8be85b-b686-4a64-ab6a-42122b1a995c-kube-api-access-l92qs\") pod \"keystone-bootstrap-czwdh\" (UID: \"8e8be85b-b686-4a64-ab6a-42122b1a995c\") " pod="openstack/keystone-bootstrap-czwdh"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.185062 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ffbf34-3350-41d4-ae62-94700d3e40bc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.185373 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2grnt\" (UniqueName: \"kubernetes.io/projected/d5ffbf34-3350-41d4-ae62-94700d3e40bc-kube-api-access-2grnt\") pod \"glance-default-external-api-0\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.185401 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5ffbf34-3350-41d4-ae62-94700d3e40bc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.185440 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5ffbf34-3350-41d4-ae62-94700d3e40bc-logs\") pod \"glance-default-external-api-0\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.185467 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5ffbf34-3350-41d4-ae62-94700d3e40bc-scripts\") pod \"glance-default-external-api-0\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.185485 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5ffbf34-3350-41d4-ae62-94700d3e40bc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.185512 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ffbf34-3350-41d4-ae62-94700d3e40bc-config-data\") pod \"glance-default-external-api-0\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.185567 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.185746 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.186527 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5ffbf34-3350-41d4-ae62-94700d3e40bc-logs\") pod \"glance-default-external-api-0\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.186624 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5ffbf34-3350-41d4-ae62-94700d3e40bc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.189493 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5ffbf34-3350-41d4-ae62-94700d3e40bc-scripts\") pod \"glance-default-external-api-0\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.189734 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ffbf34-3350-41d4-ae62-94700d3e40bc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.191235 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ffbf34-3350-41d4-ae62-94700d3e40bc-config-data\") pod \"glance-default-external-api-0\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.192222 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5ffbf34-3350-41d4-ae62-94700d3e40bc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.202117 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2grnt\" (UniqueName: \"kubernetes.io/projected/d5ffbf34-3350-41d4-ae62-94700d3e40bc-kube-api-access-2grnt\") pod \"glance-default-external-api-0\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.210406 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-czwdh"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.217276 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.238058 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a395b791-8644-48a9-8a21-98242ae82b14" path="/var/lib/kubelet/pods/a395b791-8644-48a9-8a21-98242ae82b14/volumes"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.238658 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd3cee91-47c6-4839-862b-d1c3fff0fd6b" path="/var/lib/kubelet/pods/cd3cee91-47c6-4839-862b-d1c3fff0fd6b/volumes"
Jan 29 11:46:23 crc kubenswrapper[4766]: I0129 11:46:23.263797 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 11:46:28 crc kubenswrapper[4766]: I0129 11:46:28.790897 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" podUID="da092d08-0c97-45e3-8d8a-162c6a00d827" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: i/o timeout"
Jan 29 11:46:31 crc kubenswrapper[4766]: I0129 11:46:31.568105 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-h59dv"
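The entries above are kubelet klog output wrapped in journald framing: a severity letter fused to an Lmmdd timestamp (e.g. I0129), the emitting PID, a source file:line, and a structured message with key="value" pairs. When triaging a mount storm like the keystone-bootstrap-czwdh and glance-default-external-api-0 sequence above, it can help to split that framing into fields first. A minimal sketch, assuming one entry per line; the helper name and regex here are illustrative, not part of kubelet:

```python
# Parse journald-wrapped klog lines like the ones above into dicts.
# Assumption: severity is a single leading I/W/E character on each entry.
import re

KLOG = re.compile(
    r"^(?P<wall>\w{3} +\d+ \d{2}:\d{2}:\d{2}) (?P<host>\S+) "
    r"(?P<unit>\w+)\[(?P<unitpid>\d+)\]: "
    r"(?P<sev>[IWE])(?P<ts>\d{4} \d{2}:\d{2}:\d{2}\.\d+) +(?P<pid>\d+) "
    r"(?P<src>[\w./-]+:\d+)\] (?P<msg>.*)$"
)

def parse(lines):
    for line in lines:
        m = KLOG.match(line)
        if m:
            yield m.groupdict()

if __name__ == "__main__":
    sample = ('Jan 29 11:46:22 crc kubenswrapper[4766]: I0129 11:46:22.937235 '
              '4766 util.go:30] "No sandbox for pod can be found. '
              'Need to start a new one" pod="openstack/glance-default-external-api-0"')
    for rec in parse([sample]):
        print(rec["sev"], rec["src"], rec["msg"])
```

From records like these, filtering on src (reconciler_common.go, operation_generator.go) recovers the per-volume lifecycle visible above: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded.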
Jan 29 11:46:31 crc kubenswrapper[4766]: I0129 11:46:31.673215 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-dns-svc\") pod \"da092d08-0c97-45e3-8d8a-162c6a00d827\" (UID: \"da092d08-0c97-45e3-8d8a-162c6a00d827\") "
Jan 29 11:46:31 crc kubenswrapper[4766]: I0129 11:46:31.673288 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-ovsdbserver-nb\") pod \"da092d08-0c97-45e3-8d8a-162c6a00d827\" (UID: \"da092d08-0c97-45e3-8d8a-162c6a00d827\") "
Jan 29 11:46:31 crc kubenswrapper[4766]: I0129 11:46:31.673320 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-ovsdbserver-sb\") pod \"da092d08-0c97-45e3-8d8a-162c6a00d827\" (UID: \"da092d08-0c97-45e3-8d8a-162c6a00d827\") "
Jan 29 11:46:31 crc kubenswrapper[4766]: I0129 11:46:31.673410 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-config\") pod \"da092d08-0c97-45e3-8d8a-162c6a00d827\" (UID: \"da092d08-0c97-45e3-8d8a-162c6a00d827\") "
Jan 29 11:46:31 crc kubenswrapper[4766]: I0129 11:46:31.673516 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-dns-swift-storage-0\") pod \"da092d08-0c97-45e3-8d8a-162c6a00d827\" (UID: \"da092d08-0c97-45e3-8d8a-162c6a00d827\") "
Jan 29 11:46:31 crc kubenswrapper[4766]: I0129 11:46:31.673552 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv44r\" (UniqueName: \"kubernetes.io/projected/da092d08-0c97-45e3-8d8a-162c6a00d827-kube-api-access-dv44r\") pod \"da092d08-0c97-45e3-8d8a-162c6a00d827\" (UID: \"da092d08-0c97-45e3-8d8a-162c6a00d827\") "
Jan 29 11:46:31 crc kubenswrapper[4766]: I0129 11:46:31.678530 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da092d08-0c97-45e3-8d8a-162c6a00d827-kube-api-access-dv44r" (OuterVolumeSpecName: "kube-api-access-dv44r") pod "da092d08-0c97-45e3-8d8a-162c6a00d827" (UID: "da092d08-0c97-45e3-8d8a-162c6a00d827"). InnerVolumeSpecName "kube-api-access-dv44r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:46:31 crc kubenswrapper[4766]: I0129 11:46:31.725701 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-config" (OuterVolumeSpecName: "config") pod "da092d08-0c97-45e3-8d8a-162c6a00d827" (UID: "da092d08-0c97-45e3-8d8a-162c6a00d827"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:46:31 crc kubenswrapper[4766]: I0129 11:46:31.728618 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "da092d08-0c97-45e3-8d8a-162c6a00d827" (UID: "da092d08-0c97-45e3-8d8a-162c6a00d827"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:46:31 crc kubenswrapper[4766]: I0129 11:46:31.731427 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "da092d08-0c97-45e3-8d8a-162c6a00d827" (UID: "da092d08-0c97-45e3-8d8a-162c6a00d827"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:46:31 crc kubenswrapper[4766]: I0129 11:46:31.748392 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "da092d08-0c97-45e3-8d8a-162c6a00d827" (UID: "da092d08-0c97-45e3-8d8a-162c6a00d827"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:46:31 crc kubenswrapper[4766]: I0129 11:46:31.758245 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "da092d08-0c97-45e3-8d8a-162c6a00d827" (UID: "da092d08-0c97-45e3-8d8a-162c6a00d827"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:46:31 crc kubenswrapper[4766]: I0129 11:46:31.776250 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-config\") on node \"crc\" DevicePath \"\""
Jan 29 11:46:31 crc kubenswrapper[4766]: I0129 11:46:31.776283 4766 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 29 11:46:31 crc kubenswrapper[4766]: I0129 11:46:31.776293 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dv44r\" (UniqueName: \"kubernetes.io/projected/da092d08-0c97-45e3-8d8a-162c6a00d827-kube-api-access-dv44r\") on node \"crc\" DevicePath \"\""
Jan 29 11:46:31 crc kubenswrapper[4766]: I0129 11:46:31.776301 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 29 11:46:31 crc kubenswrapper[4766]: I0129 11:46:31.776312 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 29 11:46:31 crc kubenswrapper[4766]: I0129 11:46:31.776320 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da092d08-0c97-45e3-8d8a-162c6a00d827-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 29 11:46:31 crc kubenswrapper[4766]: I0129 11:46:31.914400 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" event={"ID":"da092d08-0c97-45e3-8d8a-162c6a00d827","Type":"ContainerDied","Data":"84608bdf814f419748601d04f27acd318bfc5d8fe9b9f669e815f527d99e2612"}
Jan 29 11:46:31 crc kubenswrapper[4766]: I0129 11:46:31.914575 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-h59dv"
Jan 29 11:46:31 crc kubenswrapper[4766]: I0129 11:46:31.965302 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-h59dv"]
Jan 29 11:46:31 crc kubenswrapper[4766]: I0129 11:46:31.972184 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-h59dv"]
Jan 29 11:46:32 crc kubenswrapper[4766]: I0129 11:46:32.657981 4766 scope.go:117] "RemoveContainer" containerID="fc4f04f3d722dbefac17815a0008454d8272c06cc63c1920d1e7e3aacd16bce6"
Jan 29 11:46:32 crc kubenswrapper[4766]: E0129 11:46:32.690219 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified"
Jan 29 11:46:32 crc kubenswrapper[4766]: E0129 11:46:32.690705 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fgstc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-wc899_openstack(e2d775c8-398d-45dd-aea7-2c2bc050e040): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 29 11:46:32 crc kubenswrapper[4766]: E0129 11:46:32.691904 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-wc899" podUID="e2d775c8-398d-45dd-aea7-2c2bc050e040"
Jan 29 11:46:32 crc kubenswrapper[4766]: I0129 11:46:32.870199 4766 scope.go:117] "RemoveContainer" containerID="b701ef44e347feb43c14cfc6e87ee771ec0e9ba2936937403c6f2c8f306e1c2c"
Jan 29 11:46:32 crc kubenswrapper[4766]: I0129 11:46:32.904853 4766 scope.go:117] "RemoveContainer" containerID="e66e30a3da739d10e6e576c8ae714d072ec22bacc4d2125da012135b0bb6f3b2"
Jan 29 11:46:32 crc kubenswrapper[4766]: E0129 11:46:32.955122 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-wc899" podUID="e2d775c8-398d-45dd-aea7-2c2bc050e040"
Jan 29 11:46:33 crc kubenswrapper[4766]: I0129 11:46:33.236277 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da092d08-0c97-45e3-8d8a-162c6a00d827" path="/var/lib/kubelet/pods/da092d08-0c97-45e3-8d8a-162c6a00d827/volumes"
Jan 29 11:46:33 crc kubenswrapper[4766]: I0129 11:46:33.247395 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 11:46:33 crc kubenswrapper[4766]: W0129 11:46:33.251093 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34b4e558_a02f_4604_91ce_b99c34e061dd.slice/crio-3fde6fc2fa07629373f9c81475ac23674579ddb924a03cdcbd1726563898b176 WatchSource:0}: Error finding container 3fde6fc2fa07629373f9c81475ac23674579ddb924a03cdcbd1726563898b176: Status 404 returned error can't find the container with id 3fde6fc2fa07629373f9c81475ac23674579ddb924a03cdcbd1726563898b176
Jan 29 11:46:33 crc kubenswrapper[4766]: W0129 11:46:33.254033 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e8be85b_b686_4a64_ab6a_42122b1a995c.slice/crio-4d7addf71a6753b923bdd9c5b7dec09c0e157c61bca6acabd177e88f722999c9 WatchSource:0}: Error finding container 4d7addf71a6753b923bdd9c5b7dec09c0e157c61bca6acabd177e88f722999c9: Status 404 returned error can't find the container with id 4d7addf71a6753b923bdd9c5b7dec09c0e157c61bca6acabd177e88f722999c9
Jan 29 11:46:33 crc kubenswrapper[4766]: I0129 11:46:33.257743 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-czwdh"]
Jan 29 11:46:33 crc kubenswrapper[4766]: I0129 11:46:33.346618 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 11:46:33 crc kubenswrapper[4766]: W0129 11:46:33.379381 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5ffbf34_3350_41d4_ae62_94700d3e40bc.slice/crio-e38b8b0d5a682e48caea6b042f7191cc7fec9bcef78cf338864a60b8a430492e WatchSource:0}: Error finding container e38b8b0d5a682e48caea6b042f7191cc7fec9bcef78cf338864a60b8a430492e: Status 404 returned error can't find the container with id e38b8b0d5a682e48caea6b042f7191cc7fec9bcef78cf338864a60b8a430492e
Jan 29 11:46:33 crc kubenswrapper[4766]: I0129 11:46:33.792321 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-h59dv" podUID="da092d08-0c97-45e3-8d8a-162c6a00d827" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: i/o timeout"
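The cinder-db-sync-wc899 episode above shows the usual two-step failure: the CRI pull is canceled mid-copy (ErrImagePull at 11:46:32.69), and the retry a quarter-second later is rejected immediately with ImagePullBackOff because the kubelet does not re-pull on every sync; it backs off between attempts. A rough sketch of the delay schedule, assuming the kubelet's default image back-off of a 10s initial delay doubling to a 300s cap (confirm for your kubelet version):

```python
# Illustrative back-off schedule only; initial/factor/cap are assumptions
# matching the commonly documented kubelet defaults, not read from this log.
def backoff_schedule(initial=10.0, factor=2.0, cap=300.0, attempts=7):
    delay = initial
    for _ in range(attempts):
        yield min(delay, cap)
        delay *= factor

print(list(backoff_schedule()))  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]
```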
Jan 29 11:46:33 crc kubenswrapper[4766]: I0129 11:46:33.973880 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07b49130-ef93-4a8f-8830-ab2539302987","Type":"ContainerStarted","Data":"bac8d4ac39c784ef6ed6f83216dae502912b3dc3e8de386318caa719ac58d78b"}
Jan 29 11:46:33 crc kubenswrapper[4766]: I0129 11:46:33.978474 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-czwdh" event={"ID":"8e8be85b-b686-4a64-ab6a-42122b1a995c","Type":"ContainerStarted","Data":"a1b01b98a17e9068636d4080c5efde45109be4e9266579fe903c48c395a384f6"}
Jan 29 11:46:33 crc kubenswrapper[4766]: I0129 11:46:33.978527 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-czwdh" event={"ID":"8e8be85b-b686-4a64-ab6a-42122b1a995c","Type":"ContainerStarted","Data":"4d7addf71a6753b923bdd9c5b7dec09c0e157c61bca6acabd177e88f722999c9"}
Jan 29 11:46:33 crc kubenswrapper[4766]: I0129 11:46:33.985245 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4bqsv" event={"ID":"cc11389e-2508-468a-b9ec-25acfbde9046","Type":"ContainerStarted","Data":"7a100abcf625aab3d1ab04d02bb7f3a8d947c5eb7f100414cb20bc19c04d918d"}
Jan 29 11:46:33 crc kubenswrapper[4766]: I0129 11:46:33.995714 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d5ffbf34-3350-41d4-ae62-94700d3e40bc","Type":"ContainerStarted","Data":"36a7b334ee749d9bdadb767f5fdf15c6eab854818ce783508b3830c80759ff69"}
Jan 29 11:46:33 crc kubenswrapper[4766]: I0129 11:46:33.995758 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d5ffbf34-3350-41d4-ae62-94700d3e40bc","Type":"ContainerStarted","Data":"e38b8b0d5a682e48caea6b042f7191cc7fec9bcef78cf338864a60b8a430492e"}
Jan 29 11:46:34 crc kubenswrapper[4766]: I0129 11:46:34.001016 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-czwdh" podStartSLOduration=12.001005135 podStartE2EDuration="12.001005135s" podCreationTimestamp="2026-01-29 11:46:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:46:33.999913525 +0000 UTC m=+1531.112306536" watchObservedRunningTime="2026-01-29 11:46:34.001005135 +0000 UTC m=+1531.113398146"
Jan 29 11:46:34 crc kubenswrapper[4766]: I0129 11:46:34.040711 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tdq67" event={"ID":"db09eba3-8fd8-4448-8e6c-2819328ac301","Type":"ContainerStarted","Data":"480ab67482035d0544f7ef590575a0908518a57e5ecf17c5349eaf4d31105da6"}
Jan 29 11:46:34 crc kubenswrapper[4766]: I0129 11:46:34.085519 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-tdq67" podStartSLOduration=4.10365498 podStartE2EDuration="26.085480368s" podCreationTimestamp="2026-01-29 11:46:08 +0000 UTC" firstStartedPulling="2026-01-29 11:46:09.49998662 +0000 UTC m=+1506.612379641" lastFinishedPulling="2026-01-29 11:46:31.481812018 +0000 UTC m=+1528.594205029" observedRunningTime="2026-01-29 11:46:34.081764155 +0000 UTC m=+1531.194157176" watchObservedRunningTime="2026-01-29 11:46:34.085480368 +0000 UTC m=+1531.197873379"
Jan 29 11:46:34 crc kubenswrapper[4766]: I0129 11:46:34.100214 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-4bqsv" podStartSLOduration=3.161645864 podStartE2EDuration="26.100191416s" podCreationTimestamp="2026-01-29 11:46:08 +0000 UTC" firstStartedPulling="2026-01-29 11:46:09.71128993 +0000 UTC m=+1506.823682941" lastFinishedPulling="2026-01-29 11:46:32.649835482 +0000 UTC m=+1529.762228493" observedRunningTime="2026-01-29 11:46:34.034033131 +0000 UTC m=+1531.146426142" watchObservedRunningTime="2026-01-29 11:46:34.100191416 +0000 UTC m=+1531.212584427"
Jan 29 11:46:34 crc kubenswrapper[4766]: I0129 11:46:34.166233 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"34b4e558-a02f-4604-91ce-b99c34e061dd","Type":"ContainerStarted","Data":"8710b4eb2f7c865ac805f5eb141b278b7f982a89471c605132cd4d9e1b77baf3"}
Jan 29 11:46:34 crc kubenswrapper[4766]: I0129 11:46:34.166290 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"34b4e558-a02f-4604-91ce-b99c34e061dd","Type":"ContainerStarted","Data":"3fde6fc2fa07629373f9c81475ac23674579ddb924a03cdcbd1726563898b176"}
Jan 29 11:46:35 crc kubenswrapper[4766]: I0129 11:46:35.176735 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d5ffbf34-3350-41d4-ae62-94700d3e40bc","Type":"ContainerStarted","Data":"5821b006dbee2cf56a29b92d60e12310db1ab8ccb5a58f1a49999ad2562e76df"}
Jan 29 11:46:35 crc kubenswrapper[4766]: I0129 11:46:35.178865 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"34b4e558-a02f-4604-91ce-b99c34e061dd","Type":"ContainerStarted","Data":"234cd293c0fd1f05219c6c823a7a1f6d478a64dd1cfb8d4f9c760d4edb64cb35"}
Jan 29 11:46:35 crc kubenswrapper[4766]: I0129 11:46:35.204472 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=13.2044558 podStartE2EDuration="13.2044558s" podCreationTimestamp="2026-01-29 11:46:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:46:35.198238148 +0000 UTC m=+1532.310631179" watchObservedRunningTime="2026-01-29 11:46:35.2044558 +0000 UTC m=+1532.316848831"
Jan 29 11:46:35 crc kubenswrapper[4766]: I0129 11:46:35.233722 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=15.233705142 podStartE2EDuration="15.233705142s" podCreationTimestamp="2026-01-29 11:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:46:35.226311047 +0000 UTC m=+1532.338704068" watchObservedRunningTime="2026-01-29 11:46:35.233705142 +0000 UTC m=+1532.346098153"
Jan 29 11:46:36 crc kubenswrapper[4766]: I0129 11:46:36.196704 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07b49130-ef93-4a8f-8830-ab2539302987","Type":"ContainerStarted","Data":"f1555c94e60831cbe9a24c08703cdca7c454c3fa69bbc3886a5927a37ce9f330"}
Jan 29 11:46:38 crc kubenswrapper[4766]: I0129 11:46:38.223021 4766 generic.go:334] "Generic (PLEG): container finished" podID="8e8be85b-b686-4a64-ab6a-42122b1a995c" containerID="a1b01b98a17e9068636d4080c5efde45109be4e9266579fe903c48c395a384f6" exitCode=0
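In the pod_startup_latency_tracker lines above, podStartE2EDuration is the wall time from podCreationTimestamp to watchObservedRunningTime, while podStartSLOduration excludes the window spent pulling images (pods that pulled nothing, like keystone-bootstrap-czwdh, report the two as equal and zero-value pull timestamps). For placement-db-sync-tdq67 this can be rechecked directly from the values logged above; a quick sketch, with timestamps truncated to microseconds for strptime:

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f"
created   = datetime.strptime("2026-01-29 11:46:08.000000", FMT)  # podCreationTimestamp
pull_from = datetime.strptime("2026-01-29 11:46:09.499986", FMT)  # firstStartedPulling
pull_to   = datetime.strptime("2026-01-29 11:46:31.481812", FMT)  # lastFinishedPulling
running   = datetime.strptime("2026-01-29 11:46:34.085480", FMT)  # watchObservedRunningTime

e2e  = (running - created).total_seconds()    # ~26.085s, the logged podStartE2EDuration
pull = (pull_to - pull_from).total_seconds()  # ~21.982s spent pulling the db-sync image
print(round(e2e, 3), round(pull, 3), round(e2e - pull, 3))  # 26.085 21.982 4.104
```

The difference, about 4.104s, matches the logged podStartSLOduration=4.10365498 up to the truncation above.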
Jan 29 11:46:38 crc kubenswrapper[4766]: I0129 11:46:38.223110 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-czwdh" event={"ID":"8e8be85b-b686-4a64-ab6a-42122b1a995c","Type":"ContainerDied","Data":"a1b01b98a17e9068636d4080c5efde45109be4e9266579fe903c48c395a384f6"}
Jan 29 11:46:41 crc kubenswrapper[4766]: I0129 11:46:41.249332 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 29 11:46:41 crc kubenswrapper[4766]: I0129 11:46:41.251053 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 29 11:46:41 crc kubenswrapper[4766]: I0129 11:46:41.277145 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 29 11:46:41 crc kubenswrapper[4766]: I0129 11:46:41.286883 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 29 11:46:41 crc kubenswrapper[4766]: I0129 11:46:41.832207 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-czwdh"
Jan 29 11:46:41 crc kubenswrapper[4766]: I0129 11:46:41.984827 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-config-data\") pod \"8e8be85b-b686-4a64-ab6a-42122b1a995c\" (UID: \"8e8be85b-b686-4a64-ab6a-42122b1a995c\") "
Jan 29 11:46:41 crc kubenswrapper[4766]: I0129 11:46:41.984896 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l92qs\" (UniqueName: \"kubernetes.io/projected/8e8be85b-b686-4a64-ab6a-42122b1a995c-kube-api-access-l92qs\") pod \"8e8be85b-b686-4a64-ab6a-42122b1a995c\" (UID: \"8e8be85b-b686-4a64-ab6a-42122b1a995c\") "
Jan 29 11:46:41 crc kubenswrapper[4766]: I0129 11:46:41.985043 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-combined-ca-bundle\") pod \"8e8be85b-b686-4a64-ab6a-42122b1a995c\" (UID: \"8e8be85b-b686-4a64-ab6a-42122b1a995c\") "
Jan 29 11:46:41 crc kubenswrapper[4766]: I0129 11:46:41.985096 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-credential-keys\") pod \"8e8be85b-b686-4a64-ab6a-42122b1a995c\" (UID: \"8e8be85b-b686-4a64-ab6a-42122b1a995c\") "
Jan 29 11:46:41 crc kubenswrapper[4766]: I0129 11:46:41.985118 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-fernet-keys\") pod \"8e8be85b-b686-4a64-ab6a-42122b1a995c\" (UID: \"8e8be85b-b686-4a64-ab6a-42122b1a995c\") "
Jan 29 11:46:41 crc kubenswrapper[4766]: I0129 11:46:41.985146 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-scripts\") pod \"8e8be85b-b686-4a64-ab6a-42122b1a995c\" (UID: \"8e8be85b-b686-4a64-ab6a-42122b1a995c\") "
Jan 29 11:46:41 crc kubenswrapper[4766]: I0129 11:46:41.990620 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-scripts" (OuterVolumeSpecName: "scripts") pod "8e8be85b-b686-4a64-ab6a-42122b1a995c" (UID: "8e8be85b-b686-4a64-ab6a-42122b1a995c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:46:41 crc kubenswrapper[4766]: I0129 11:46:41.990648 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "8e8be85b-b686-4a64-ab6a-42122b1a995c" (UID: "8e8be85b-b686-4a64-ab6a-42122b1a995c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:46:41 crc kubenswrapper[4766]: I0129 11:46:41.991366 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e8be85b-b686-4a64-ab6a-42122b1a995c-kube-api-access-l92qs" (OuterVolumeSpecName: "kube-api-access-l92qs") pod "8e8be85b-b686-4a64-ab6a-42122b1a995c" (UID: "8e8be85b-b686-4a64-ab6a-42122b1a995c"). InnerVolumeSpecName "kube-api-access-l92qs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:46:42 crc kubenswrapper[4766]: I0129 11:46:42.004855 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8e8be85b-b686-4a64-ab6a-42122b1a995c" (UID: "8e8be85b-b686-4a64-ab6a-42122b1a995c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:46:42 crc kubenswrapper[4766]: I0129 11:46:42.010492 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-config-data" (OuterVolumeSpecName: "config-data") pod "8e8be85b-b686-4a64-ab6a-42122b1a995c" (UID: "8e8be85b-b686-4a64-ab6a-42122b1a995c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:46:42 crc kubenswrapper[4766]: I0129 11:46:42.010932 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e8be85b-b686-4a64-ab6a-42122b1a995c" (UID: "8e8be85b-b686-4a64-ab6a-42122b1a995c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:46:42 crc kubenswrapper[4766]: I0129 11:46:42.087824 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 11:46:42 crc kubenswrapper[4766]: I0129 11:46:42.088183 4766 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 29 11:46:42 crc kubenswrapper[4766]: I0129 11:46:42.088299 4766 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 29 11:46:42 crc kubenswrapper[4766]: I0129 11:46:42.088397 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 11:46:42 crc kubenswrapper[4766]: I0129 11:46:42.088566 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e8be85b-b686-4a64-ab6a-42122b1a995c-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 11:46:42 crc kubenswrapper[4766]: I0129 11:46:42.088699 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l92qs\" (UniqueName: \"kubernetes.io/projected/8e8be85b-b686-4a64-ab6a-42122b1a995c-kube-api-access-l92qs\") on node \"crc\" DevicePath \"\""
Jan 29 11:46:42 crc kubenswrapper[4766]: I0129 11:46:42.274530 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-czwdh"
Jan 29 11:46:42 crc kubenswrapper[4766]: I0129 11:46:42.275157 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-czwdh" event={"ID":"8e8be85b-b686-4a64-ab6a-42122b1a995c","Type":"ContainerDied","Data":"4d7addf71a6753b923bdd9c5b7dec09c0e157c61bca6acabd177e88f722999c9"}
Jan 29 11:46:42 crc kubenswrapper[4766]: I0129 11:46:42.275257 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d7addf71a6753b923bdd9c5b7dec09c0e157c61bca6acabd177e88f722999c9"
Jan 29 11:46:42 crc kubenswrapper[4766]: I0129 11:46:42.275337 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 29 11:46:42 crc kubenswrapper[4766]: I0129 11:46:42.275672 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 29 11:46:42 crc kubenswrapper[4766]: I0129 11:46:42.950993 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6757d49457-dctc6"]
Jan 29 11:46:42 crc kubenswrapper[4766]: E0129 11:46:42.951623 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da092d08-0c97-45e3-8d8a-162c6a00d827" containerName="init"
Jan 29 11:46:42 crc kubenswrapper[4766]: I0129 11:46:42.951643 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="da092d08-0c97-45e3-8d8a-162c6a00d827" containerName="init"
Jan 29 11:46:42 crc kubenswrapper[4766]: E0129 11:46:42.951656 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da092d08-0c97-45e3-8d8a-162c6a00d827" containerName="dnsmasq-dns"
Jan 29 11:46:42 crc kubenswrapper[4766]: I0129 11:46:42.951662 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="da092d08-0c97-45e3-8d8a-162c6a00d827" containerName="dnsmasq-dns"
Jan 29 11:46:42 crc kubenswrapper[4766]: E0129 11:46:42.951679 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e8be85b-b686-4a64-ab6a-42122b1a995c" containerName="keystone-bootstrap"
Jan 29 11:46:42 crc kubenswrapper[4766]: I0129 11:46:42.951685 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e8be85b-b686-4a64-ab6a-42122b1a995c" containerName="keystone-bootstrap"
Jan 29 11:46:42 crc kubenswrapper[4766]: I0129 11:46:42.951865 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="da092d08-0c97-45e3-8d8a-162c6a00d827" containerName="dnsmasq-dns"
Jan 29 11:46:42 crc kubenswrapper[4766]: I0129 11:46:42.951883 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e8be85b-b686-4a64-ab6a-42122b1a995c" containerName="keystone-bootstrap"
Jan 29 11:46:42 crc kubenswrapper[4766]: I0129 11:46:42.952397 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:46:42 crc kubenswrapper[4766]: I0129 11:46:42.956581 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 29 11:46:42 crc kubenswrapper[4766]: I0129 11:46:42.960652 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 29 11:46:42 crc kubenswrapper[4766]: I0129 11:46:42.960917 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Jan 29 11:46:42 crc kubenswrapper[4766]: I0129 11:46:42.961051 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Jan 29 11:46:42 crc kubenswrapper[4766]: I0129 11:46:42.961535 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 29 11:46:42 crc kubenswrapper[4766]: I0129 11:46:42.961800 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-68trd"
Jan 29 11:46:42 crc kubenswrapper[4766]: I0129 11:46:42.972356 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6757d49457-dctc6"]
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.106702 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhtcg\" (UniqueName: \"kubernetes.io/projected/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-kube-api-access-mhtcg\") pod \"keystone-6757d49457-dctc6\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") " pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.106812 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-internal-tls-certs\") pod \"keystone-6757d49457-dctc6\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") " pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.106892 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-fernet-keys\") pod \"keystone-6757d49457-dctc6\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") " pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.106995 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-combined-ca-bundle\") pod \"keystone-6757d49457-dctc6\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") " pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.107127 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-config-data\") pod \"keystone-6757d49457-dctc6\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") " pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.107176 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-scripts\") pod \"keystone-6757d49457-dctc6\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") " pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.107271 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-public-tls-certs\") pod \"keystone-6757d49457-dctc6\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") " pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.107316 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-credential-keys\") pod \"keystone-6757d49457-dctc6\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") " pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.208701 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-fernet-keys\") pod \"keystone-6757d49457-dctc6\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") " pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.208794 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-combined-ca-bundle\") pod \"keystone-6757d49457-dctc6\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") " pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.208858 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-config-data\") pod \"keystone-6757d49457-dctc6\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") " pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.208894 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-scripts\") pod \"keystone-6757d49457-dctc6\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") " pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.209697 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-public-tls-certs\") pod \"keystone-6757d49457-dctc6\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") " pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.209762 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-credential-keys\") pod \"keystone-6757d49457-dctc6\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") " pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.209849 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhtcg\" (UniqueName: \"kubernetes.io/projected/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-kube-api-access-mhtcg\") pod \"keystone-6757d49457-dctc6\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") " pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.209914 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-internal-tls-certs\") pod \"keystone-6757d49457-dctc6\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") " pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.215300 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-credential-keys\") pod \"keystone-6757d49457-dctc6\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") " pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.215546 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-fernet-keys\") pod \"keystone-6757d49457-dctc6\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") " pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.217618 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-scripts\") pod \"keystone-6757d49457-dctc6\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") " pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.218020 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-config-data\") pod \"keystone-6757d49457-dctc6\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") " pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.230527 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhtcg\" (UniqueName: \"kubernetes.io/projected/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-kube-api-access-mhtcg\") pod \"keystone-6757d49457-dctc6\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") " pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.234743 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-public-tls-certs\") pod \"keystone-6757d49457-dctc6\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") " pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.234906 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-internal-tls-certs\") pod \"keystone-6757d49457-dctc6\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") " pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.235533 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-combined-ca-bundle\") pod \"keystone-6757d49457-dctc6\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") " pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.264595 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.264678 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.310827 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.315749 4766 generic.go:334] "Generic (PLEG): container finished" podID="db09eba3-8fd8-4448-8e6c-2819328ac301" containerID="480ab67482035d0544f7ef590575a0908518a57e5ecf17c5349eaf4d31105da6" exitCode=0
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.315818 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tdq67" event={"ID":"db09eba3-8fd8-4448-8e6c-2819328ac301","Type":"ContainerDied","Data":"480ab67482035d0544f7ef590575a0908518a57e5ecf17c5349eaf4d31105da6"}
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.317834 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.319445 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07b49130-ef93-4a8f-8830-ab2539302987","Type":"ContainerStarted","Data":"5fa425b33c0943d95638cb1f22947ab54527f1816e388476e032b3fda7b99d0d"}
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.319607 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.325933 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 29 11:46:43 crc kubenswrapper[4766]: W0129 11:46:43.806689 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0607cc62_49d5_4a25_b4ad_636cae5d1e7e.slice/crio-c6261f71b81e62cd6a6850c7a8333dc90251566642ff031959560a0904f6f6d2 WatchSource:0}: Error finding container c6261f71b81e62cd6a6850c7a8333dc90251566642ff031959560a0904f6f6d2: Status 404 returned error can't find the container with id c6261f71b81e62cd6a6850c7a8333dc90251566642ff031959560a0904f6f6d2
Jan 29 11:46:43 crc kubenswrapper[4766]: I0129 11:46:43.811616 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6757d49457-dctc6"]
Jan 29 11:46:44 crc kubenswrapper[4766]: I0129 11:46:44.344325 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 11:46:44 crc kubenswrapper[4766]: I0129 11:46:44.344655 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 11:46:44 crc kubenswrapper[4766]: I0129 11:46:44.345874 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6757d49457-dctc6" event={"ID":"0607cc62-49d5-4a25-b4ad-636cae5d1e7e","Type":"ContainerStarted","Data":"c6261f71b81e62cd6a6850c7a8333dc90251566642ff031959560a0904f6f6d2"}
Jan 29 11:46:44 crc kubenswrapper[4766]: I0129 11:46:44.348360 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 29 11:46:44 crc kubenswrapper[4766]: I0129 11:46:44.573525 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 29 11:46:44 crc kubenswrapper[4766]: I0129 11:46:44.632697 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 29 11:46:44 crc kubenswrapper[4766]: I0129 11:46:44.696243 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tdq67"
Jan 29 11:46:44 crc kubenswrapper[4766]: I0129 11:46:44.763880 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db09eba3-8fd8-4448-8e6c-2819328ac301-combined-ca-bundle\") pod \"db09eba3-8fd8-4448-8e6c-2819328ac301\" (UID: \"db09eba3-8fd8-4448-8e6c-2819328ac301\") "
Jan 29 11:46:44 crc kubenswrapper[4766]: I0129 11:46:44.763987 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzm27\" (UniqueName: \"kubernetes.io/projected/db09eba3-8fd8-4448-8e6c-2819328ac301-kube-api-access-tzm27\") pod \"db09eba3-8fd8-4448-8e6c-2819328ac301\" (UID: \"db09eba3-8fd8-4448-8e6c-2819328ac301\") "
Jan 29 11:46:44 crc kubenswrapper[4766]: I0129 11:46:44.764039 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db09eba3-8fd8-4448-8e6c-2819328ac301-scripts\") pod \"db09eba3-8fd8-4448-8e6c-2819328ac301\" (UID: \"db09eba3-8fd8-4448-8e6c-2819328ac301\") "
Jan 29 11:46:44 crc kubenswrapper[4766]: I0129 11:46:44.764070 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db09eba3-8fd8-4448-8e6c-2819328ac301-config-data\") pod \"db09eba3-8fd8-4448-8e6c-2819328ac301\" (UID: \"db09eba3-8fd8-4448-8e6c-2819328ac301\") "
Jan 29 11:46:44 crc kubenswrapper[4766]: I0129 11:46:44.764133 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db09eba3-8fd8-4448-8e6c-2819328ac301-logs\") pod \"db09eba3-8fd8-4448-8e6c-2819328ac301\" (UID: \"db09eba3-8fd8-4448-8e6c-2819328ac301\") "
Jan 29 11:46:44 crc kubenswrapper[4766]: I0129 11:46:44.765108 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db09eba3-8fd8-4448-8e6c-2819328ac301-logs" (OuterVolumeSpecName: "logs") pod "db09eba3-8fd8-4448-8e6c-2819328ac301" (UID: "db09eba3-8fd8-4448-8e6c-2819328ac301"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:46:44 crc kubenswrapper[4766]: I0129 11:46:44.772955 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db09eba3-8fd8-4448-8e6c-2819328ac301-scripts" (OuterVolumeSpecName: "scripts") pod "db09eba3-8fd8-4448-8e6c-2819328ac301" (UID: "db09eba3-8fd8-4448-8e6c-2819328ac301"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:46:44 crc kubenswrapper[4766]: I0129 11:46:44.779441 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db09eba3-8fd8-4448-8e6c-2819328ac301-kube-api-access-tzm27" (OuterVolumeSpecName: "kube-api-access-tzm27") pod "db09eba3-8fd8-4448-8e6c-2819328ac301" (UID: "db09eba3-8fd8-4448-8e6c-2819328ac301"). InnerVolumeSpecName "kube-api-access-tzm27". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:46:44 crc kubenswrapper[4766]: I0129 11:46:44.863266 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db09eba3-8fd8-4448-8e6c-2819328ac301-config-data" (OuterVolumeSpecName: "config-data") pod "db09eba3-8fd8-4448-8e6c-2819328ac301" (UID: "db09eba3-8fd8-4448-8e6c-2819328ac301"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:46:44 crc kubenswrapper[4766]: I0129 11:46:44.870029 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzm27\" (UniqueName: \"kubernetes.io/projected/db09eba3-8fd8-4448-8e6c-2819328ac301-kube-api-access-tzm27\") on node \"crc\" DevicePath \"\""
Jan 29 11:46:44 crc kubenswrapper[4766]: I0129 11:46:44.870079 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db09eba3-8fd8-4448-8e6c-2819328ac301-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 11:46:44 crc kubenswrapper[4766]: I0129 11:46:44.870090 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db09eba3-8fd8-4448-8e6c-2819328ac301-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 11:46:44 crc kubenswrapper[4766]: I0129 11:46:44.870101 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db09eba3-8fd8-4448-8e6c-2819328ac301-logs\") on node \"crc\" DevicePath \"\""
Jan 29 11:46:44 crc kubenswrapper[4766]: I0129 11:46:44.881894 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db09eba3-8fd8-4448-8e6c-2819328ac301-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db09eba3-8fd8-4448-8e6c-2819328ac301" (UID: "db09eba3-8fd8-4448-8e6c-2819328ac301"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:46:44 crc kubenswrapper[4766]: I0129 11:46:44.975742 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db09eba3-8fd8-4448-8e6c-2819328ac301-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.354988 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6757d49457-dctc6" event={"ID":"0607cc62-49d5-4a25-b4ad-636cae5d1e7e","Type":"ContainerStarted","Data":"9a99c0592d77644bf5b6f77afc5cf7aaa5c3a2e758cf41c91b1d8d6f29b64745"}
Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.355087 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.356728 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.357536 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tdq67" event={"ID":"db09eba3-8fd8-4448-8e6c-2819328ac301","Type":"ContainerDied","Data":"bda017599b729bf24b57b5e457afcc4209b20199d18e9f6e820de84637f5b1cb"}
Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.357575 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bda017599b729bf24b57b5e457afcc4209b20199d18e9f6e820de84637f5b1cb"
Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.357548 4766 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/placement-db-sync-tdq67" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.436730 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6757d49457-dctc6" podStartSLOduration=3.436708318 podStartE2EDuration="3.436708318s" podCreationTimestamp="2026-01-29 11:46:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:46:45.374601546 +0000 UTC m=+1542.486994587" watchObservedRunningTime="2026-01-29 11:46:45.436708318 +0000 UTC m=+1542.549101329" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.557709 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7ff4655576-rzc26"] Jan 29 11:46:45 crc kubenswrapper[4766]: E0129 11:46:45.558186 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db09eba3-8fd8-4448-8e6c-2819328ac301" containerName="placement-db-sync" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.558205 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="db09eba3-8fd8-4448-8e6c-2819328ac301" containerName="placement-db-sync" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.558474 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="db09eba3-8fd8-4448-8e6c-2819328ac301" containerName="placement-db-sync" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.559512 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7ff4655576-rzc26" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.563614 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.563922 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.564534 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.564937 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.566004 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-8j7c7" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.581675 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7ff4655576-rzc26"] Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.586873 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-scripts\") pod \"placement-7ff4655576-rzc26\" (UID: \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\") " pod="openstack/placement-7ff4655576-rzc26" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.587181 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-internal-tls-certs\") pod \"placement-7ff4655576-rzc26\" (UID: \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\") " pod="openstack/placement-7ff4655576-rzc26" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.587389 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/8162079c-abe4-4e9c-bdd5-2fbb43187e61-logs\") pod \"placement-7ff4655576-rzc26\" (UID: \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\") " pod="openstack/placement-7ff4655576-rzc26" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.587531 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-config-data\") pod \"placement-7ff4655576-rzc26\" (UID: \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\") " pod="openstack/placement-7ff4655576-rzc26" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.587617 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-public-tls-certs\") pod \"placement-7ff4655576-rzc26\" (UID: \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\") " pod="openstack/placement-7ff4655576-rzc26" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.587902 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-combined-ca-bundle\") pod \"placement-7ff4655576-rzc26\" (UID: \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\") " pod="openstack/placement-7ff4655576-rzc26" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.588038 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcb9s\" (UniqueName: \"kubernetes.io/projected/8162079c-abe4-4e9c-bdd5-2fbb43187e61-kube-api-access-vcb9s\") pod \"placement-7ff4655576-rzc26\" (UID: \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\") " pod="openstack/placement-7ff4655576-rzc26" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.690461 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-combined-ca-bundle\") pod \"placement-7ff4655576-rzc26\" (UID: \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\") " pod="openstack/placement-7ff4655576-rzc26" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.690538 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcb9s\" (UniqueName: \"kubernetes.io/projected/8162079c-abe4-4e9c-bdd5-2fbb43187e61-kube-api-access-vcb9s\") pod \"placement-7ff4655576-rzc26\" (UID: \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\") " pod="openstack/placement-7ff4655576-rzc26" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.690631 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-scripts\") pod \"placement-7ff4655576-rzc26\" (UID: \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\") " pod="openstack/placement-7ff4655576-rzc26" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.690666 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-internal-tls-certs\") pod \"placement-7ff4655576-rzc26\" (UID: \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\") " pod="openstack/placement-7ff4655576-rzc26" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.690716 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/8162079c-abe4-4e9c-bdd5-2fbb43187e61-logs\") pod \"placement-7ff4655576-rzc26\" (UID: \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\") " pod="openstack/placement-7ff4655576-rzc26" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.690745 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-config-data\") pod \"placement-7ff4655576-rzc26\" (UID: \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\") " pod="openstack/placement-7ff4655576-rzc26" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.690772 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-public-tls-certs\") pod \"placement-7ff4655576-rzc26\" (UID: \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\") " pod="openstack/placement-7ff4655576-rzc26" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.692192 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8162079c-abe4-4e9c-bdd5-2fbb43187e61-logs\") pod \"placement-7ff4655576-rzc26\" (UID: \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\") " pod="openstack/placement-7ff4655576-rzc26" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.694962 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-config-data\") pod \"placement-7ff4655576-rzc26\" (UID: \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\") " pod="openstack/placement-7ff4655576-rzc26" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.696063 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-combined-ca-bundle\") pod \"placement-7ff4655576-rzc26\" (UID: \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\") " pod="openstack/placement-7ff4655576-rzc26" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.714354 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-scripts\") pod \"placement-7ff4655576-rzc26\" (UID: \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\") " pod="openstack/placement-7ff4655576-rzc26" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.714428 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-public-tls-certs\") pod \"placement-7ff4655576-rzc26\" (UID: \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\") " pod="openstack/placement-7ff4655576-rzc26" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.715696 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-internal-tls-certs\") pod \"placement-7ff4655576-rzc26\" (UID: \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\") " pod="openstack/placement-7ff4655576-rzc26" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.717967 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcb9s\" (UniqueName: \"kubernetes.io/projected/8162079c-abe4-4e9c-bdd5-2fbb43187e61-kube-api-access-vcb9s\") pod \"placement-7ff4655576-rzc26\" (UID: \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\") " 
pod="openstack/placement-7ff4655576-rzc26" Jan 29 11:46:45 crc kubenswrapper[4766]: I0129 11:46:45.890903 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7ff4655576-rzc26" Jan 29 11:46:46 crc kubenswrapper[4766]: I0129 11:46:46.021933 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 11:46:46 crc kubenswrapper[4766]: I0129 11:46:46.029835 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 11:46:46 crc kubenswrapper[4766]: I0129 11:46:46.446060 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7ff4655576-rzc26"] Jan 29 11:46:46 crc kubenswrapper[4766]: W0129 11:46:46.467335 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8162079c_abe4_4e9c_bdd5_2fbb43187e61.slice/crio-f4e02c5c9ca3232944730e43315de023c249db73a8f920bedf12c540c18cd376 WatchSource:0}: Error finding container f4e02c5c9ca3232944730e43315de023c249db73a8f920bedf12c540c18cd376: Status 404 returned error can't find the container with id f4e02c5c9ca3232944730e43315de023c249db73a8f920bedf12c540c18cd376 Jan 29 11:46:47 crc kubenswrapper[4766]: I0129 11:46:47.384565 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ff4655576-rzc26" event={"ID":"8162079c-abe4-4e9c-bdd5-2fbb43187e61","Type":"ContainerStarted","Data":"6b47eab1e9e54a967ffb6a8dbb5d22f27c753e7cad3329b3e2436f5c3898c7c9"} Jan 29 11:46:47 crc kubenswrapper[4766]: I0129 11:46:47.385253 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ff4655576-rzc26" event={"ID":"8162079c-abe4-4e9c-bdd5-2fbb43187e61","Type":"ContainerStarted","Data":"8b8626b814bdc9ebbe0eb6d6c45744653225b6c9c53cd0a3325216664d30e4d6"} Jan 29 11:46:47 crc kubenswrapper[4766]: I0129 11:46:47.385355 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ff4655576-rzc26" event={"ID":"8162079c-abe4-4e9c-bdd5-2fbb43187e61","Type":"ContainerStarted","Data":"f4e02c5c9ca3232944730e43315de023c249db73a8f920bedf12c540c18cd376"} Jan 29 11:46:47 crc kubenswrapper[4766]: I0129 11:46:47.385488 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7ff4655576-rzc26" Jan 29 11:46:47 crc kubenswrapper[4766]: I0129 11:46:47.385600 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7ff4655576-rzc26" Jan 29 11:46:48 crc kubenswrapper[4766]: I0129 11:46:48.246231 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7ff4655576-rzc26" podStartSLOduration=3.246210967 podStartE2EDuration="3.246210967s" podCreationTimestamp="2026-01-29 11:46:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:46:47.414342076 +0000 UTC m=+1544.526735107" watchObservedRunningTime="2026-01-29 11:46:48.246210967 +0000 UTC m=+1545.358603988" Jan 29 11:46:48 crc kubenswrapper[4766]: I0129 11:46:48.398439 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4bqsv" event={"ID":"cc11389e-2508-468a-b9ec-25acfbde9046","Type":"ContainerDied","Data":"7a100abcf625aab3d1ab04d02bb7f3a8d947c5eb7f100414cb20bc19c04d918d"} Jan 29 11:46:48 crc kubenswrapper[4766]: I0129 11:46:48.398368 4766 generic.go:334] "Generic 
(PLEG): container finished" podID="cc11389e-2508-468a-b9ec-25acfbde9046" containerID="7a100abcf625aab3d1ab04d02bb7f3a8d947c5eb7f100414cb20bc19c04d918d" exitCode=0 Jan 29 11:46:50 crc kubenswrapper[4766]: I0129 11:46:50.420505 4766 generic.go:334] "Generic (PLEG): container finished" podID="16bc3c63-cee9-4f14-82bf-2f912e65cf14" containerID="6178e29f3f97ee21ec4cf7acfdfb1b895e6b1f01bc50ed4550a76d923def4120" exitCode=0 Jan 29 11:46:50 crc kubenswrapper[4766]: I0129 11:46:50.420643 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4mnfs" event={"ID":"16bc3c63-cee9-4f14-82bf-2f912e65cf14","Type":"ContainerDied","Data":"6178e29f3f97ee21ec4cf7acfdfb1b895e6b1f01bc50ed4550a76d923def4120"} Jan 29 11:46:51 crc kubenswrapper[4766]: I0129 11:46:51.250398 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4bqsv" Jan 29 11:46:51 crc kubenswrapper[4766]: I0129 11:46:51.417436 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc11389e-2508-468a-b9ec-25acfbde9046-combined-ca-bundle\") pod \"cc11389e-2508-468a-b9ec-25acfbde9046\" (UID: \"cc11389e-2508-468a-b9ec-25acfbde9046\") " Jan 29 11:46:51 crc kubenswrapper[4766]: I0129 11:46:51.417516 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc11389e-2508-468a-b9ec-25acfbde9046-db-sync-config-data\") pod \"cc11389e-2508-468a-b9ec-25acfbde9046\" (UID: \"cc11389e-2508-468a-b9ec-25acfbde9046\") " Jan 29 11:46:51 crc kubenswrapper[4766]: I0129 11:46:51.417608 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp8ss\" (UniqueName: \"kubernetes.io/projected/cc11389e-2508-468a-b9ec-25acfbde9046-kube-api-access-tp8ss\") pod \"cc11389e-2508-468a-b9ec-25acfbde9046\" (UID: \"cc11389e-2508-468a-b9ec-25acfbde9046\") " Jan 29 11:46:51 crc kubenswrapper[4766]: I0129 11:46:51.423083 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc11389e-2508-468a-b9ec-25acfbde9046-kube-api-access-tp8ss" (OuterVolumeSpecName: "kube-api-access-tp8ss") pod "cc11389e-2508-468a-b9ec-25acfbde9046" (UID: "cc11389e-2508-468a-b9ec-25acfbde9046"). InnerVolumeSpecName "kube-api-access-tp8ss". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:46:51 crc kubenswrapper[4766]: I0129 11:46:51.432071 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4bqsv" event={"ID":"cc11389e-2508-468a-b9ec-25acfbde9046","Type":"ContainerDied","Data":"54da3e2b1fa9c3e0ded6eb32952f427687439fc1af3417381a7b8a1bc7bfe49c"} Jan 29 11:46:51 crc kubenswrapper[4766]: I0129 11:46:51.432115 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54da3e2b1fa9c3e0ded6eb32952f427687439fc1af3417381a7b8a1bc7bfe49c" Jan 29 11:46:51 crc kubenswrapper[4766]: I0129 11:46:51.432089 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4bqsv" Jan 29 11:46:51 crc kubenswrapper[4766]: I0129 11:46:51.432865 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc11389e-2508-468a-b9ec-25acfbde9046-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "cc11389e-2508-468a-b9ec-25acfbde9046" (UID: "cc11389e-2508-468a-b9ec-25acfbde9046"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:46:51 crc kubenswrapper[4766]: I0129 11:46:51.446129 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc11389e-2508-468a-b9ec-25acfbde9046-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc11389e-2508-468a-b9ec-25acfbde9046" (UID: "cc11389e-2508-468a-b9ec-25acfbde9046"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:46:51 crc kubenswrapper[4766]: I0129 11:46:51.555140 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc11389e-2508-468a-b9ec-25acfbde9046-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:51 crc kubenswrapper[4766]: I0129 11:46:51.555176 4766 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc11389e-2508-468a-b9ec-25acfbde9046-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:51 crc kubenswrapper[4766]: I0129 11:46:51.555189 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp8ss\" (UniqueName: \"kubernetes.io/projected/cc11389e-2508-468a-b9ec-25acfbde9046-kube-api-access-tp8ss\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.513121 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-67f655d9dc-95fxw"] Jan 29 11:46:52 crc kubenswrapper[4766]: E0129 11:46:52.513501 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc11389e-2508-468a-b9ec-25acfbde9046" containerName="barbican-db-sync" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.513514 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc11389e-2508-468a-b9ec-25acfbde9046" containerName="barbican-db-sync" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.513688 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc11389e-2508-468a-b9ec-25acfbde9046" containerName="barbican-db-sync" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.514758 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-67f655d9dc-95fxw" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.519498 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.519910 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-6pfwh" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.520162 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.550132 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-67f655d9dc-95fxw"] Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.574737 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e238ce2e-9a21-43c5-94c2-0a31ab078c79-combined-ca-bundle\") pod \"barbican-worker-67f655d9dc-95fxw\" (UID: \"e238ce2e-9a21-43c5-94c2-0a31ab078c79\") " pod="openstack/barbican-worker-67f655d9dc-95fxw" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.574803 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e238ce2e-9a21-43c5-94c2-0a31ab078c79-config-data-custom\") pod \"barbican-worker-67f655d9dc-95fxw\" (UID: \"e238ce2e-9a21-43c5-94c2-0a31ab078c79\") " pod="openstack/barbican-worker-67f655d9dc-95fxw" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.574848 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjhcp\" (UniqueName: \"kubernetes.io/projected/e238ce2e-9a21-43c5-94c2-0a31ab078c79-kube-api-access-xjhcp\") pod \"barbican-worker-67f655d9dc-95fxw\" (UID: \"e238ce2e-9a21-43c5-94c2-0a31ab078c79\") " pod="openstack/barbican-worker-67f655d9dc-95fxw" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.574946 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e238ce2e-9a21-43c5-94c2-0a31ab078c79-config-data\") pod \"barbican-worker-67f655d9dc-95fxw\" (UID: \"e238ce2e-9a21-43c5-94c2-0a31ab078c79\") " pod="openstack/barbican-worker-67f655d9dc-95fxw" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.574978 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e238ce2e-9a21-43c5-94c2-0a31ab078c79-logs\") pod \"barbican-worker-67f655d9dc-95fxw\" (UID: \"e238ce2e-9a21-43c5-94c2-0a31ab078c79\") " pod="openstack/barbican-worker-67f655d9dc-95fxw" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.610532 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2"] Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.612115 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.618663 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.633228 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2"] Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.655529 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-q9947"] Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.657387 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-q9947" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.676286 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-q9947\" (UID: \"efcaac53-fcf9-47d7-bc27-3246517249ea\") " pod="openstack/dnsmasq-dns-7c67bffd47-q9947" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.676344 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e238ce2e-9a21-43c5-94c2-0a31ab078c79-combined-ca-bundle\") pod \"barbican-worker-67f655d9dc-95fxw\" (UID: \"e238ce2e-9a21-43c5-94c2-0a31ab078c79\") " pod="openstack/barbican-worker-67f655d9dc-95fxw" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.676371 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9xpr\" (UniqueName: \"kubernetes.io/projected/15805cd2-3301-4e59-8c66-adde53408809-kube-api-access-p9xpr\") pod \"barbican-keystone-listener-65cd6d7bdb-jmsw2\" (UID: \"15805cd2-3301-4e59-8c66-adde53408809\") " pod="openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.676393 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e238ce2e-9a21-43c5-94c2-0a31ab078c79-config-data-custom\") pod \"barbican-worker-67f655d9dc-95fxw\" (UID: \"e238ce2e-9a21-43c5-94c2-0a31ab078c79\") " pod="openstack/barbican-worker-67f655d9dc-95fxw" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.676440 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjhcp\" (UniqueName: \"kubernetes.io/projected/e238ce2e-9a21-43c5-94c2-0a31ab078c79-kube-api-access-xjhcp\") pod \"barbican-worker-67f655d9dc-95fxw\" (UID: \"e238ce2e-9a21-43c5-94c2-0a31ab078c79\") " pod="openstack/barbican-worker-67f655d9dc-95fxw" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.676477 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-config\") pod \"dnsmasq-dns-7c67bffd47-q9947\" (UID: \"efcaac53-fcf9-47d7-bc27-3246517249ea\") " pod="openstack/dnsmasq-dns-7c67bffd47-q9947" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.676494 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15805cd2-3301-4e59-8c66-adde53408809-combined-ca-bundle\") pod 
\"barbican-keystone-listener-65cd6d7bdb-jmsw2\" (UID: \"15805cd2-3301-4e59-8c66-adde53408809\") " pod="openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.676513 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15805cd2-3301-4e59-8c66-adde53408809-config-data-custom\") pod \"barbican-keystone-listener-65cd6d7bdb-jmsw2\" (UID: \"15805cd2-3301-4e59-8c66-adde53408809\") " pod="openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.676534 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-q9947\" (UID: \"efcaac53-fcf9-47d7-bc27-3246517249ea\") " pod="openstack/dnsmasq-dns-7c67bffd47-q9947" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.676575 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15805cd2-3301-4e59-8c66-adde53408809-logs\") pod \"barbican-keystone-listener-65cd6d7bdb-jmsw2\" (UID: \"15805cd2-3301-4e59-8c66-adde53408809\") " pod="openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.676598 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xspwt\" (UniqueName: \"kubernetes.io/projected/efcaac53-fcf9-47d7-bc27-3246517249ea-kube-api-access-xspwt\") pod \"dnsmasq-dns-7c67bffd47-q9947\" (UID: \"efcaac53-fcf9-47d7-bc27-3246517249ea\") " pod="openstack/dnsmasq-dns-7c67bffd47-q9947" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.676622 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e238ce2e-9a21-43c5-94c2-0a31ab078c79-config-data\") pod \"barbican-worker-67f655d9dc-95fxw\" (UID: \"e238ce2e-9a21-43c5-94c2-0a31ab078c79\") " pod="openstack/barbican-worker-67f655d9dc-95fxw" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.676641 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-q9947\" (UID: \"efcaac53-fcf9-47d7-bc27-3246517249ea\") " pod="openstack/dnsmasq-dns-7c67bffd47-q9947" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.676656 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15805cd2-3301-4e59-8c66-adde53408809-config-data\") pod \"barbican-keystone-listener-65cd6d7bdb-jmsw2\" (UID: \"15805cd2-3301-4e59-8c66-adde53408809\") " pod="openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.676678 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e238ce2e-9a21-43c5-94c2-0a31ab078c79-logs\") pod \"barbican-worker-67f655d9dc-95fxw\" (UID: \"e238ce2e-9a21-43c5-94c2-0a31ab078c79\") " pod="openstack/barbican-worker-67f655d9dc-95fxw" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.676728 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-q9947\" (UID: \"efcaac53-fcf9-47d7-bc27-3246517249ea\") " pod="openstack/dnsmasq-dns-7c67bffd47-q9947" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.678015 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e238ce2e-9a21-43c5-94c2-0a31ab078c79-logs\") pod \"barbican-worker-67f655d9dc-95fxw\" (UID: \"e238ce2e-9a21-43c5-94c2-0a31ab078c79\") " pod="openstack/barbican-worker-67f655d9dc-95fxw" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.681480 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e238ce2e-9a21-43c5-94c2-0a31ab078c79-combined-ca-bundle\") pod \"barbican-worker-67f655d9dc-95fxw\" (UID: \"e238ce2e-9a21-43c5-94c2-0a31ab078c79\") " pod="openstack/barbican-worker-67f655d9dc-95fxw" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.688971 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e238ce2e-9a21-43c5-94c2-0a31ab078c79-config-data-custom\") pod \"barbican-worker-67f655d9dc-95fxw\" (UID: \"e238ce2e-9a21-43c5-94c2-0a31ab078c79\") " pod="openstack/barbican-worker-67f655d9dc-95fxw" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.689084 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-q9947"] Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.699076 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjhcp\" (UniqueName: \"kubernetes.io/projected/e238ce2e-9a21-43c5-94c2-0a31ab078c79-kube-api-access-xjhcp\") pod \"barbican-worker-67f655d9dc-95fxw\" (UID: \"e238ce2e-9a21-43c5-94c2-0a31ab078c79\") " pod="openstack/barbican-worker-67f655d9dc-95fxw" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.706225 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e238ce2e-9a21-43c5-94c2-0a31ab078c79-config-data\") pod \"barbican-worker-67f655d9dc-95fxw\" (UID: \"e238ce2e-9a21-43c5-94c2-0a31ab078c79\") " pod="openstack/barbican-worker-67f655d9dc-95fxw" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.781511 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-config\") pod \"dnsmasq-dns-7c67bffd47-q9947\" (UID: \"efcaac53-fcf9-47d7-bc27-3246517249ea\") " pod="openstack/dnsmasq-dns-7c67bffd47-q9947" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.781571 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15805cd2-3301-4e59-8c66-adde53408809-combined-ca-bundle\") pod \"barbican-keystone-listener-65cd6d7bdb-jmsw2\" (UID: \"15805cd2-3301-4e59-8c66-adde53408809\") " pod="openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.781597 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15805cd2-3301-4e59-8c66-adde53408809-config-data-custom\") pod \"barbican-keystone-listener-65cd6d7bdb-jmsw2\" (UID: 
\"15805cd2-3301-4e59-8c66-adde53408809\") " pod="openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.781630 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-q9947\" (UID: \"efcaac53-fcf9-47d7-bc27-3246517249ea\") " pod="openstack/dnsmasq-dns-7c67bffd47-q9947" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.781695 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15805cd2-3301-4e59-8c66-adde53408809-logs\") pod \"barbican-keystone-listener-65cd6d7bdb-jmsw2\" (UID: \"15805cd2-3301-4e59-8c66-adde53408809\") " pod="openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.781731 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xspwt\" (UniqueName: \"kubernetes.io/projected/efcaac53-fcf9-47d7-bc27-3246517249ea-kube-api-access-xspwt\") pod \"dnsmasq-dns-7c67bffd47-q9947\" (UID: \"efcaac53-fcf9-47d7-bc27-3246517249ea\") " pod="openstack/dnsmasq-dns-7c67bffd47-q9947" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.781766 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-q9947\" (UID: \"efcaac53-fcf9-47d7-bc27-3246517249ea\") " pod="openstack/dnsmasq-dns-7c67bffd47-q9947" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.781786 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15805cd2-3301-4e59-8c66-adde53408809-config-data\") pod \"barbican-keystone-listener-65cd6d7bdb-jmsw2\" (UID: \"15805cd2-3301-4e59-8c66-adde53408809\") " pod="openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.781873 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-q9947\" (UID: \"efcaac53-fcf9-47d7-bc27-3246517249ea\") " pod="openstack/dnsmasq-dns-7c67bffd47-q9947" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.781908 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-q9947\" (UID: \"efcaac53-fcf9-47d7-bc27-3246517249ea\") " pod="openstack/dnsmasq-dns-7c67bffd47-q9947" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.781959 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9xpr\" (UniqueName: \"kubernetes.io/projected/15805cd2-3301-4e59-8c66-adde53408809-kube-api-access-p9xpr\") pod \"barbican-keystone-listener-65cd6d7bdb-jmsw2\" (UID: \"15805cd2-3301-4e59-8c66-adde53408809\") " pod="openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.792996 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-ovsdbserver-sb\") pod 
\"dnsmasq-dns-7c67bffd47-q9947\" (UID: \"efcaac53-fcf9-47d7-bc27-3246517249ea\") " pod="openstack/dnsmasq-dns-7c67bffd47-q9947" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.799794 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15805cd2-3301-4e59-8c66-adde53408809-logs\") pod \"barbican-keystone-listener-65cd6d7bdb-jmsw2\" (UID: \"15805cd2-3301-4e59-8c66-adde53408809\") " pod="openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.803224 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-q9947\" (UID: \"efcaac53-fcf9-47d7-bc27-3246517249ea\") " pod="openstack/dnsmasq-dns-7c67bffd47-q9947" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.804025 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-q9947\" (UID: \"efcaac53-fcf9-47d7-bc27-3246517249ea\") " pod="openstack/dnsmasq-dns-7c67bffd47-q9947" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.804068 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-q9947\" (UID: \"efcaac53-fcf9-47d7-bc27-3246517249ea\") " pod="openstack/dnsmasq-dns-7c67bffd47-q9947" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.808687 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-config\") pod \"dnsmasq-dns-7c67bffd47-q9947\" (UID: \"efcaac53-fcf9-47d7-bc27-3246517249ea\") " pod="openstack/dnsmasq-dns-7c67bffd47-q9947" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.845340 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15805cd2-3301-4e59-8c66-adde53408809-combined-ca-bundle\") pod \"barbican-keystone-listener-65cd6d7bdb-jmsw2\" (UID: \"15805cd2-3301-4e59-8c66-adde53408809\") " pod="openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.846755 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15805cd2-3301-4e59-8c66-adde53408809-config-data\") pod \"barbican-keystone-listener-65cd6d7bdb-jmsw2\" (UID: \"15805cd2-3301-4e59-8c66-adde53408809\") " pod="openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.847784 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xspwt\" (UniqueName: \"kubernetes.io/projected/efcaac53-fcf9-47d7-bc27-3246517249ea-kube-api-access-xspwt\") pod \"dnsmasq-dns-7c67bffd47-q9947\" (UID: \"efcaac53-fcf9-47d7-bc27-3246517249ea\") " pod="openstack/dnsmasq-dns-7c67bffd47-q9947" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.855372 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9xpr\" (UniqueName: \"kubernetes.io/projected/15805cd2-3301-4e59-8c66-adde53408809-kube-api-access-p9xpr\") pod \"barbican-keystone-listener-65cd6d7bdb-jmsw2\" 
(UID: \"15805cd2-3301-4e59-8c66-adde53408809\") " pod="openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.858132 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-67f655d9dc-95fxw" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.907221 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15805cd2-3301-4e59-8c66-adde53408809-config-data-custom\") pod \"barbican-keystone-listener-65cd6d7bdb-jmsw2\" (UID: \"15805cd2-3301-4e59-8c66-adde53408809\") " pod="openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.919512 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7fb8b49db-d28l6"] Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.938845 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7fb8b49db-d28l6" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.951827 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.956025 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2" Jan 29 11:46:52 crc kubenswrapper[4766]: I0129 11:46:52.964574 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7fb8b49db-d28l6"] Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:53.072187 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-q9947" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:53.092203 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55hfl\" (UniqueName: \"kubernetes.io/projected/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-kube-api-access-55hfl\") pod \"barbican-api-7fb8b49db-d28l6\" (UID: \"ca25e412-9c10-45d3-84b4-a8f059ddcfbc\") " pod="openstack/barbican-api-7fb8b49db-d28l6" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:53.092300 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-config-data\") pod \"barbican-api-7fb8b49db-d28l6\" (UID: \"ca25e412-9c10-45d3-84b4-a8f059ddcfbc\") " pod="openstack/barbican-api-7fb8b49db-d28l6" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:53.092337 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-config-data-custom\") pod \"barbican-api-7fb8b49db-d28l6\" (UID: \"ca25e412-9c10-45d3-84b4-a8f059ddcfbc\") " pod="openstack/barbican-api-7fb8b49db-d28l6" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:53.092439 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-logs\") pod \"barbican-api-7fb8b49db-d28l6\" (UID: \"ca25e412-9c10-45d3-84b4-a8f059ddcfbc\") " pod="openstack/barbican-api-7fb8b49db-d28l6" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:53.092469 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-combined-ca-bundle\") pod \"barbican-api-7fb8b49db-d28l6\" (UID: \"ca25e412-9c10-45d3-84b4-a8f059ddcfbc\") " pod="openstack/barbican-api-7fb8b49db-d28l6" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:53.195018 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-combined-ca-bundle\") pod \"barbican-api-7fb8b49db-d28l6\" (UID: \"ca25e412-9c10-45d3-84b4-a8f059ddcfbc\") " pod="openstack/barbican-api-7fb8b49db-d28l6" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:53.195134 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55hfl\" (UniqueName: \"kubernetes.io/projected/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-kube-api-access-55hfl\") pod \"barbican-api-7fb8b49db-d28l6\" (UID: \"ca25e412-9c10-45d3-84b4-a8f059ddcfbc\") " pod="openstack/barbican-api-7fb8b49db-d28l6" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:53.195216 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-config-data\") pod \"barbican-api-7fb8b49db-d28l6\" (UID: \"ca25e412-9c10-45d3-84b4-a8f059ddcfbc\") " pod="openstack/barbican-api-7fb8b49db-d28l6" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:53.195247 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-config-data-custom\") pod \"barbican-api-7fb8b49db-d28l6\" (UID: \"ca25e412-9c10-45d3-84b4-a8f059ddcfbc\") " pod="openstack/barbican-api-7fb8b49db-d28l6" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:53.196158 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-logs\") pod \"barbican-api-7fb8b49db-d28l6\" (UID: \"ca25e412-9c10-45d3-84b4-a8f059ddcfbc\") " pod="openstack/barbican-api-7fb8b49db-d28l6" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:53.197005 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-logs\") pod \"barbican-api-7fb8b49db-d28l6\" (UID: \"ca25e412-9c10-45d3-84b4-a8f059ddcfbc\") " pod="openstack/barbican-api-7fb8b49db-d28l6" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:53.199646 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-config-data-custom\") pod \"barbican-api-7fb8b49db-d28l6\" (UID: \"ca25e412-9c10-45d3-84b4-a8f059ddcfbc\") " pod="openstack/barbican-api-7fb8b49db-d28l6" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:53.199915 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-combined-ca-bundle\") pod \"barbican-api-7fb8b49db-d28l6\" (UID: \"ca25e412-9c10-45d3-84b4-a8f059ddcfbc\") " pod="openstack/barbican-api-7fb8b49db-d28l6" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:53.200586 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-config-data\") pod \"barbican-api-7fb8b49db-d28l6\" (UID: \"ca25e412-9c10-45d3-84b4-a8f059ddcfbc\") " pod="openstack/barbican-api-7fb8b49db-d28l6" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:53.214013 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55hfl\" (UniqueName: \"kubernetes.io/projected/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-kube-api-access-55hfl\") pod \"barbican-api-7fb8b49db-d28l6\" (UID: \"ca25e412-9c10-45d3-84b4-a8f059ddcfbc\") " pod="openstack/barbican-api-7fb8b49db-d28l6" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:53.270322 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7fb8b49db-d28l6" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:55.590578 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4mnfs" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:55.739322 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbd2s\" (UniqueName: \"kubernetes.io/projected/16bc3c63-cee9-4f14-82bf-2f912e65cf14-kube-api-access-rbd2s\") pod \"16bc3c63-cee9-4f14-82bf-2f912e65cf14\" (UID: \"16bc3c63-cee9-4f14-82bf-2f912e65cf14\") " Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:55.739451 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/16bc3c63-cee9-4f14-82bf-2f912e65cf14-config\") pod \"16bc3c63-cee9-4f14-82bf-2f912e65cf14\" (UID: \"16bc3c63-cee9-4f14-82bf-2f912e65cf14\") " Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:55.739562 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16bc3c63-cee9-4f14-82bf-2f912e65cf14-combined-ca-bundle\") pod \"16bc3c63-cee9-4f14-82bf-2f912e65cf14\" (UID: \"16bc3c63-cee9-4f14-82bf-2f912e65cf14\") " Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:55.770863 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bc3c63-cee9-4f14-82bf-2f912e65cf14-kube-api-access-rbd2s" (OuterVolumeSpecName: "kube-api-access-rbd2s") pod "16bc3c63-cee9-4f14-82bf-2f912e65cf14" (UID: "16bc3c63-cee9-4f14-82bf-2f912e65cf14"). InnerVolumeSpecName "kube-api-access-rbd2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:55.771171 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bc3c63-cee9-4f14-82bf-2f912e65cf14-config" (OuterVolumeSpecName: "config") pod "16bc3c63-cee9-4f14-82bf-2f912e65cf14" (UID: "16bc3c63-cee9-4f14-82bf-2f912e65cf14"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:55.793354 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bc3c63-cee9-4f14-82bf-2f912e65cf14-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16bc3c63-cee9-4f14-82bf-2f912e65cf14" (UID: "16bc3c63-cee9-4f14-82bf-2f912e65cf14"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:55.847558 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16bc3c63-cee9-4f14-82bf-2f912e65cf14-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:55.847586 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbd2s\" (UniqueName: \"kubernetes.io/projected/16bc3c63-cee9-4f14-82bf-2f912e65cf14-kube-api-access-rbd2s\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:55.847598 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/16bc3c63-cee9-4f14-82bf-2f912e65cf14-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.474292 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4mnfs" event={"ID":"16bc3c63-cee9-4f14-82bf-2f912e65cf14","Type":"ContainerDied","Data":"e65ff2aee0064554a7e724b8206cdcc85fd55a960798bb15f01347ed2692e0fc"} Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.474603 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e65ff2aee0064554a7e724b8206cdcc85fd55a960798bb15f01347ed2692e0fc" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.474527 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4mnfs" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.508738 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-864fcd46f6-bn7r2"] Jan 29 11:46:56 crc kubenswrapper[4766]: E0129 11:46:56.509175 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16bc3c63-cee9-4f14-82bf-2f912e65cf14" containerName="neutron-db-sync" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.509195 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="16bc3c63-cee9-4f14-82bf-2f912e65cf14" containerName="neutron-db-sync" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.509462 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="16bc3c63-cee9-4f14-82bf-2f912e65cf14" containerName="neutron-db-sync" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.510560 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-864fcd46f6-bn7r2" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.513239 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.513287 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.538316 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-864fcd46f6-bn7r2"] Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.588813 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bz26\" (UniqueName: \"kubernetes.io/projected/a084e5b1-d167-4678-8ab9-af72fb1d07fd-kube-api-access-9bz26\") pod \"barbican-api-864fcd46f6-bn7r2\" (UID: \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\") " pod="openstack/barbican-api-864fcd46f6-bn7r2" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.588919 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-internal-tls-certs\") pod \"barbican-api-864fcd46f6-bn7r2\" (UID: \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\") " pod="openstack/barbican-api-864fcd46f6-bn7r2" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.588964 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a084e5b1-d167-4678-8ab9-af72fb1d07fd-logs\") pod \"barbican-api-864fcd46f6-bn7r2\" (UID: \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\") " pod="openstack/barbican-api-864fcd46f6-bn7r2" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.588985 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-config-data\") pod \"barbican-api-864fcd46f6-bn7r2\" (UID: \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\") " pod="openstack/barbican-api-864fcd46f6-bn7r2" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.589003 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-config-data-custom\") pod \"barbican-api-864fcd46f6-bn7r2\" (UID: \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\") " pod="openstack/barbican-api-864fcd46f6-bn7r2" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.589056 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-public-tls-certs\") pod \"barbican-api-864fcd46f6-bn7r2\" (UID: \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\") " pod="openstack/barbican-api-864fcd46f6-bn7r2" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.589100 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-combined-ca-bundle\") pod \"barbican-api-864fcd46f6-bn7r2\" (UID: \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\") " pod="openstack/barbican-api-864fcd46f6-bn7r2" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.691051 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a084e5b1-d167-4678-8ab9-af72fb1d07fd-logs\") pod \"barbican-api-864fcd46f6-bn7r2\" (UID: \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\") " pod="openstack/barbican-api-864fcd46f6-bn7r2" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.691121 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-config-data\") pod \"barbican-api-864fcd46f6-bn7r2\" (UID: \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\") " pod="openstack/barbican-api-864fcd46f6-bn7r2" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.691157 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-config-data-custom\") pod \"barbican-api-864fcd46f6-bn7r2\" (UID: \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\") " pod="openstack/barbican-api-864fcd46f6-bn7r2" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.691226 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-public-tls-certs\") pod \"barbican-api-864fcd46f6-bn7r2\" (UID: \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\") " pod="openstack/barbican-api-864fcd46f6-bn7r2" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.691254 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-combined-ca-bundle\") pod \"barbican-api-864fcd46f6-bn7r2\" (UID: \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\") " pod="openstack/barbican-api-864fcd46f6-bn7r2" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.691456 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bz26\" (UniqueName: \"kubernetes.io/projected/a084e5b1-d167-4678-8ab9-af72fb1d07fd-kube-api-access-9bz26\") pod \"barbican-api-864fcd46f6-bn7r2\" (UID: \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\") " pod="openstack/barbican-api-864fcd46f6-bn7r2" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.691568 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-internal-tls-certs\") pod \"barbican-api-864fcd46f6-bn7r2\" (UID: \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\") " pod="openstack/barbican-api-864fcd46f6-bn7r2" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.691700 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a084e5b1-d167-4678-8ab9-af72fb1d07fd-logs\") pod \"barbican-api-864fcd46f6-bn7r2\" (UID: \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\") " pod="openstack/barbican-api-864fcd46f6-bn7r2" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.697014 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-combined-ca-bundle\") pod \"barbican-api-864fcd46f6-bn7r2\" (UID: \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\") " pod="openstack/barbican-api-864fcd46f6-bn7r2" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.697083 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-internal-tls-certs\") pod \"barbican-api-864fcd46f6-bn7r2\" (UID: \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\") " pod="openstack/barbican-api-864fcd46f6-bn7r2" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.697041 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-config-data-custom\") pod \"barbican-api-864fcd46f6-bn7r2\" (UID: \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\") " pod="openstack/barbican-api-864fcd46f6-bn7r2" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.697617 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-public-tls-certs\") pod \"barbican-api-864fcd46f6-bn7r2\" (UID: \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\") " pod="openstack/barbican-api-864fcd46f6-bn7r2" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.699916 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-config-data\") pod \"barbican-api-864fcd46f6-bn7r2\" (UID: \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\") " pod="openstack/barbican-api-864fcd46f6-bn7r2" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.723287 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bz26\" (UniqueName: \"kubernetes.io/projected/a084e5b1-d167-4678-8ab9-af72fb1d07fd-kube-api-access-9bz26\") pod \"barbican-api-864fcd46f6-bn7r2\" (UID: \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\") " pod="openstack/barbican-api-864fcd46f6-bn7r2" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.837689 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-q9947"] Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.838708 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-864fcd46f6-bn7r2" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.856202 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-6cnjd"] Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.858146 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.877740 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-6cnjd"] Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.896531 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-6cnjd\" (UID: \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\") " pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.896572 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-6cnjd\" (UID: \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\") " pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.896634 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcdc2\" (UniqueName: \"kubernetes.io/projected/4c8b3024-3e34-488a-8cea-ec3ee57fda99-kube-api-access-dcdc2\") pod \"dnsmasq-dns-848cf88cfc-6cnjd\" (UID: \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\") " pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.896671 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-6cnjd\" (UID: \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\") " pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.896781 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-6cnjd\" (UID: \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\") " pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.897026 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-config\") pod \"dnsmasq-dns-848cf88cfc-6cnjd\" (UID: \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\") " pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.941423 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6854899c48-wx94v"] Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.942853 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6854899c48-wx94v" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.947568 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.949307 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.949489 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.949952 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-dw4jf" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.965474 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6854899c48-wx94v"] Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.999451 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-6cnjd\" (UID: \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\") " pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.999518 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-httpd-config\") pod \"neutron-6854899c48-wx94v\" (UID: \"c0ab1cb8-dc08-4f72-a765-083f2a511a7e\") " pod="openstack/neutron-6854899c48-wx94v" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.999564 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-config\") pod \"neutron-6854899c48-wx94v\" (UID: \"c0ab1cb8-dc08-4f72-a765-083f2a511a7e\") " pod="openstack/neutron-6854899c48-wx94v" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.999598 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-combined-ca-bundle\") pod \"neutron-6854899c48-wx94v\" (UID: \"c0ab1cb8-dc08-4f72-a765-083f2a511a7e\") " pod="openstack/neutron-6854899c48-wx94v" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.999628 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-config\") pod \"dnsmasq-dns-848cf88cfc-6cnjd\" (UID: \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\") " pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.999651 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-ovndb-tls-certs\") pod \"neutron-6854899c48-wx94v\" (UID: \"c0ab1cb8-dc08-4f72-a765-083f2a511a7e\") " pod="openstack/neutron-6854899c48-wx94v" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.999692 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zg7j\" (UniqueName: \"kubernetes.io/projected/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-kube-api-access-5zg7j\") pod \"neutron-6854899c48-wx94v\" (UID: 
\"c0ab1cb8-dc08-4f72-a765-083f2a511a7e\") " pod="openstack/neutron-6854899c48-wx94v" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.999711 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-6cnjd\" (UID: \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\") " pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.999731 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-6cnjd\" (UID: \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\") " pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.999772 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcdc2\" (UniqueName: \"kubernetes.io/projected/4c8b3024-3e34-488a-8cea-ec3ee57fda99-kube-api-access-dcdc2\") pod \"dnsmasq-dns-848cf88cfc-6cnjd\" (UID: \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\") " pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd" Jan 29 11:46:56 crc kubenswrapper[4766]: I0129 11:46:56.999806 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-6cnjd\" (UID: \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\") " pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd" Jan 29 11:46:57 crc kubenswrapper[4766]: I0129 11:46:57.000742 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-6cnjd\" (UID: \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\") " pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd" Jan 29 11:46:57 crc kubenswrapper[4766]: I0129 11:46:57.001349 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-config\") pod \"dnsmasq-dns-848cf88cfc-6cnjd\" (UID: \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\") " pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd" Jan 29 11:46:57 crc kubenswrapper[4766]: I0129 11:46:57.001925 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-6cnjd\" (UID: \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\") " pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd" Jan 29 11:46:57 crc kubenswrapper[4766]: I0129 11:46:57.003527 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-6cnjd\" (UID: \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\") " pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd" Jan 29 11:46:57 crc kubenswrapper[4766]: I0129 11:46:57.005372 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-6cnjd\" (UID: \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\") " pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd" Jan 
29 11:46:57 crc kubenswrapper[4766]: I0129 11:46:57.036251 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcdc2\" (UniqueName: \"kubernetes.io/projected/4c8b3024-3e34-488a-8cea-ec3ee57fda99-kube-api-access-dcdc2\") pod \"dnsmasq-dns-848cf88cfc-6cnjd\" (UID: \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\") " pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd" Jan 29 11:46:57 crc kubenswrapper[4766]: I0129 11:46:57.103075 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-httpd-config\") pod \"neutron-6854899c48-wx94v\" (UID: \"c0ab1cb8-dc08-4f72-a765-083f2a511a7e\") " pod="openstack/neutron-6854899c48-wx94v" Jan 29 11:46:57 crc kubenswrapper[4766]: I0129 11:46:57.103173 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-config\") pod \"neutron-6854899c48-wx94v\" (UID: \"c0ab1cb8-dc08-4f72-a765-083f2a511a7e\") " pod="openstack/neutron-6854899c48-wx94v" Jan 29 11:46:57 crc kubenswrapper[4766]: I0129 11:46:57.103220 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-combined-ca-bundle\") pod \"neutron-6854899c48-wx94v\" (UID: \"c0ab1cb8-dc08-4f72-a765-083f2a511a7e\") " pod="openstack/neutron-6854899c48-wx94v" Jan 29 11:46:57 crc kubenswrapper[4766]: I0129 11:46:57.103272 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-ovndb-tls-certs\") pod \"neutron-6854899c48-wx94v\" (UID: \"c0ab1cb8-dc08-4f72-a765-083f2a511a7e\") " pod="openstack/neutron-6854899c48-wx94v" Jan 29 11:46:57 crc kubenswrapper[4766]: I0129 11:46:57.103341 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zg7j\" (UniqueName: \"kubernetes.io/projected/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-kube-api-access-5zg7j\") pod \"neutron-6854899c48-wx94v\" (UID: \"c0ab1cb8-dc08-4f72-a765-083f2a511a7e\") " pod="openstack/neutron-6854899c48-wx94v" Jan 29 11:46:57 crc kubenswrapper[4766]: I0129 11:46:57.113907 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-ovndb-tls-certs\") pod \"neutron-6854899c48-wx94v\" (UID: \"c0ab1cb8-dc08-4f72-a765-083f2a511a7e\") " pod="openstack/neutron-6854899c48-wx94v" Jan 29 11:46:57 crc kubenswrapper[4766]: I0129 11:46:57.115840 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-combined-ca-bundle\") pod \"neutron-6854899c48-wx94v\" (UID: \"c0ab1cb8-dc08-4f72-a765-083f2a511a7e\") " pod="openstack/neutron-6854899c48-wx94v" Jan 29 11:46:57 crc kubenswrapper[4766]: I0129 11:46:57.117846 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-config\") pod \"neutron-6854899c48-wx94v\" (UID: \"c0ab1cb8-dc08-4f72-a765-083f2a511a7e\") " pod="openstack/neutron-6854899c48-wx94v" Jan 29 11:46:57 crc kubenswrapper[4766]: I0129 11:46:57.118550 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" 
(UniqueName: \"kubernetes.io/secret/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-httpd-config\") pod \"neutron-6854899c48-wx94v\" (UID: \"c0ab1cb8-dc08-4f72-a765-083f2a511a7e\") " pod="openstack/neutron-6854899c48-wx94v" Jan 29 11:46:57 crc kubenswrapper[4766]: I0129 11:46:57.122709 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zg7j\" (UniqueName: \"kubernetes.io/projected/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-kube-api-access-5zg7j\") pod \"neutron-6854899c48-wx94v\" (UID: \"c0ab1cb8-dc08-4f72-a765-083f2a511a7e\") " pod="openstack/neutron-6854899c48-wx94v" Jan 29 11:46:57 crc kubenswrapper[4766]: I0129 11:46:57.265864 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-q9947"] Jan 29 11:46:57 crc kubenswrapper[4766]: I0129 11:46:57.266893 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd" Jan 29 11:46:57 crc kubenswrapper[4766]: I0129 11:46:57.378142 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6854899c48-wx94v" Jan 29 11:46:57 crc kubenswrapper[4766]: I0129 11:46:57.558161 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-67f655d9dc-95fxw"] Jan 29 11:46:57 crc kubenswrapper[4766]: I0129 11:46:57.573772 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2"] Jan 29 11:46:57 crc kubenswrapper[4766]: I0129 11:46:57.650490 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-864fcd46f6-bn7r2"] Jan 29 11:46:57 crc kubenswrapper[4766]: I0129 11:46:57.674047 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7fb8b49db-d28l6"] Jan 29 11:46:58 crc kubenswrapper[4766]: W0129 11:46:58.170970 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefcaac53_fcf9_47d7_bc27_3246517249ea.slice/crio-26cf1e6edaa6d7e17547708209437a7c3b691e850d9d551502bed9c9791b2a9e WatchSource:0}: Error finding container 26cf1e6edaa6d7e17547708209437a7c3b691e850d9d551502bed9c9791b2a9e: Status 404 returned error can't find the container with id 26cf1e6edaa6d7e17547708209437a7c3b691e850d9d551502bed9c9791b2a9e Jan 29 11:46:58 crc kubenswrapper[4766]: W0129 11:46:58.171711 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode238ce2e_9a21_43c5_94c2_0a31ab078c79.slice/crio-af5ccad1b02071aef2bdf3859a959e621847797164538b3b6ab1737c1f33fbbd WatchSource:0}: Error finding container af5ccad1b02071aef2bdf3859a959e621847797164538b3b6ab1737c1f33fbbd: Status 404 returned error can't find the container with id af5ccad1b02071aef2bdf3859a959e621847797164538b3b6ab1737c1f33fbbd Jan 29 11:46:58 crc kubenswrapper[4766]: I0129 11:46:58.500563 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-q9947" event={"ID":"efcaac53-fcf9-47d7-bc27-3246517249ea","Type":"ContainerStarted","Data":"26cf1e6edaa6d7e17547708209437a7c3b691e850d9d551502bed9c9791b2a9e"} Jan 29 11:46:58 crc kubenswrapper[4766]: I0129 11:46:58.501906 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-67f655d9dc-95fxw" event={"ID":"e238ce2e-9a21-43c5-94c2-0a31ab078c79","Type":"ContainerStarted","Data":"af5ccad1b02071aef2bdf3859a959e621847797164538b3b6ab1737c1f33fbbd"} Jan 29 11:46:58 crc 
kubenswrapper[4766]: I0129 11:46:58.503933 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-864fcd46f6-bn7r2" event={"ID":"a084e5b1-d167-4678-8ab9-af72fb1d07fd","Type":"ContainerStarted","Data":"729628f99dbe284d0566cd89b7bc6d6668d3c6d4355d51dccc7a5107d775097c"} Jan 29 11:46:58 crc kubenswrapper[4766]: I0129 11:46:58.505628 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2" event={"ID":"15805cd2-3301-4e59-8c66-adde53408809","Type":"ContainerStarted","Data":"2d40d22f92c6bf5e4e4bbdc1538e2654c36c7803cc9e2cfa8fd51a1d59aff90a"} Jan 29 11:46:58 crc kubenswrapper[4766]: I0129 11:46:58.506809 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fb8b49db-d28l6" event={"ID":"ca25e412-9c10-45d3-84b4-a8f059ddcfbc","Type":"ContainerStarted","Data":"d051dd9e7b44dfa14e7fbd198237702c487ab53c8592b774a7aa988ab4fa8f2b"} Jan 29 11:46:59 crc kubenswrapper[4766]: I0129 11:46:59.519612 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6854899c48-wx94v"] Jan 29 11:46:59 crc kubenswrapper[4766]: I0129 11:46:59.540689 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-6cnjd"] Jan 29 11:46:59 crc kubenswrapper[4766]: I0129 11:46:59.549283 4766 generic.go:334] "Generic (PLEG): container finished" podID="efcaac53-fcf9-47d7-bc27-3246517249ea" containerID="8dbb71b6ca4784db891c02e59b59677bd986e4b9d22be96b42e3d8a88ee6f03d" exitCode=0 Jan 29 11:46:59 crc kubenswrapper[4766]: I0129 11:46:59.549394 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-q9947" event={"ID":"efcaac53-fcf9-47d7-bc27-3246517249ea","Type":"ContainerDied","Data":"8dbb71b6ca4784db891c02e59b59677bd986e4b9d22be96b42e3d8a88ee6f03d"} Jan 29 11:46:59 crc kubenswrapper[4766]: I0129 11:46:59.568356 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07b49130-ef93-4a8f-8830-ab2539302987" containerName="ceilometer-central-agent" containerID="cri-o://bac8d4ac39c784ef6ed6f83216dae502912b3dc3e8de386318caa719ac58d78b" gracePeriod=30 Jan 29 11:46:59 crc kubenswrapper[4766]: I0129 11:46:59.569483 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-864fcd46f6-bn7r2" event={"ID":"a084e5b1-d167-4678-8ab9-af72fb1d07fd","Type":"ContainerStarted","Data":"69129b08fe5bfc552d777715d2a1eac20f74a31b1c06ebb3940050c592d7eaeb"} Jan 29 11:46:59 crc kubenswrapper[4766]: I0129 11:46:59.569522 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07b49130-ef93-4a8f-8830-ab2539302987","Type":"ContainerStarted","Data":"b9f2fdd05cf99e938cb47082363813a173212174744fc293bbec5e80f3b35b4b"} Jan 29 11:46:59 crc kubenswrapper[4766]: I0129 11:46:59.569554 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07b49130-ef93-4a8f-8830-ab2539302987" containerName="proxy-httpd" containerID="cri-o://b9f2fdd05cf99e938cb47082363813a173212174744fc293bbec5e80f3b35b4b" gracePeriod=30 Jan 29 11:46:59 crc kubenswrapper[4766]: I0129 11:46:59.569590 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07b49130-ef93-4a8f-8830-ab2539302987" containerName="ceilometer-notification-agent" containerID="cri-o://f1555c94e60831cbe9a24c08703cdca7c454c3fa69bbc3886a5927a37ce9f330" gracePeriod=30 Jan 29 11:46:59 crc kubenswrapper[4766]: I0129 
11:46:59.569651 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07b49130-ef93-4a8f-8830-ab2539302987" containerName="sg-core" containerID="cri-o://5fa425b33c0943d95638cb1f22947ab54527f1816e388476e032b3fda7b99d0d" gracePeriod=30 Jan 29 11:46:59 crc kubenswrapper[4766]: I0129 11:46:59.571880 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 11:46:59 crc kubenswrapper[4766]: I0129 11:46:59.589574 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fb8b49db-d28l6" event={"ID":"ca25e412-9c10-45d3-84b4-a8f059ddcfbc","Type":"ContainerStarted","Data":"6a4e17c0cd9abeab7ce0e8c3bda8defe3726697a183c6da899ffcc8fde44193a"} Jan 29 11:46:59 crc kubenswrapper[4766]: I0129 11:46:59.630944 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.197086122 podStartE2EDuration="51.630927266s" podCreationTimestamp="2026-01-29 11:46:08 +0000 UTC" firstStartedPulling="2026-01-29 11:46:09.503871867 +0000 UTC m=+1506.616264878" lastFinishedPulling="2026-01-29 11:46:58.937713011 +0000 UTC m=+1556.050106022" observedRunningTime="2026-01-29 11:46:59.611821216 +0000 UTC m=+1556.724214227" watchObservedRunningTime="2026-01-29 11:46:59.630927266 +0000 UTC m=+1556.743320277" Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.238970 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-q9947" Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.301363 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-dns-swift-storage-0\") pod \"efcaac53-fcf9-47d7-bc27-3246517249ea\" (UID: \"efcaac53-fcf9-47d7-bc27-3246517249ea\") " Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.301496 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-dns-svc\") pod \"efcaac53-fcf9-47d7-bc27-3246517249ea\" (UID: \"efcaac53-fcf9-47d7-bc27-3246517249ea\") " Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.301611 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-ovsdbserver-nb\") pod \"efcaac53-fcf9-47d7-bc27-3246517249ea\" (UID: \"efcaac53-fcf9-47d7-bc27-3246517249ea\") " Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.301651 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-ovsdbserver-sb\") pod \"efcaac53-fcf9-47d7-bc27-3246517249ea\" (UID: \"efcaac53-fcf9-47d7-bc27-3246517249ea\") " Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.301733 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xspwt\" (UniqueName: \"kubernetes.io/projected/efcaac53-fcf9-47d7-bc27-3246517249ea-kube-api-access-xspwt\") pod \"efcaac53-fcf9-47d7-bc27-3246517249ea\" (UID: \"efcaac53-fcf9-47d7-bc27-3246517249ea\") " Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.301760 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-config\") pod \"efcaac53-fcf9-47d7-bc27-3246517249ea\" (UID: \"efcaac53-fcf9-47d7-bc27-3246517249ea\") " Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.306577 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efcaac53-fcf9-47d7-bc27-3246517249ea-kube-api-access-xspwt" (OuterVolumeSpecName: "kube-api-access-xspwt") pod "efcaac53-fcf9-47d7-bc27-3246517249ea" (UID: "efcaac53-fcf9-47d7-bc27-3246517249ea"). InnerVolumeSpecName "kube-api-access-xspwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.324847 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "efcaac53-fcf9-47d7-bc27-3246517249ea" (UID: "efcaac53-fcf9-47d7-bc27-3246517249ea"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.335944 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "efcaac53-fcf9-47d7-bc27-3246517249ea" (UID: "efcaac53-fcf9-47d7-bc27-3246517249ea"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.339710 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "efcaac53-fcf9-47d7-bc27-3246517249ea" (UID: "efcaac53-fcf9-47d7-bc27-3246517249ea"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.347142 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-config" (OuterVolumeSpecName: "config") pod "efcaac53-fcf9-47d7-bc27-3246517249ea" (UID: "efcaac53-fcf9-47d7-bc27-3246517249ea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.356139 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "efcaac53-fcf9-47d7-bc27-3246517249ea" (UID: "efcaac53-fcf9-47d7-bc27-3246517249ea"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.404624 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xspwt\" (UniqueName: \"kubernetes.io/projected/efcaac53-fcf9-47d7-bc27-3246517249ea-kube-api-access-xspwt\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.404830 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.404882 4766 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.404930 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.406184 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.406279 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/efcaac53-fcf9-47d7-bc27-3246517249ea-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.602028 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-q9947" event={"ID":"efcaac53-fcf9-47d7-bc27-3246517249ea","Type":"ContainerDied","Data":"26cf1e6edaa6d7e17547708209437a7c3b691e850d9d551502bed9c9791b2a9e"} Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.602103 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-q9947" Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.602829 4766 scope.go:117] "RemoveContainer" containerID="8dbb71b6ca4784db891c02e59b59677bd986e4b9d22be96b42e3d8a88ee6f03d" Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.604191 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wc899" event={"ID":"e2d775c8-398d-45dd-aea7-2c2bc050e040","Type":"ContainerStarted","Data":"1a5abfa52d446485a846754eb67676b987aa6b3104b0f18b430343686110ea02"} Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.610497 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6854899c48-wx94v" event={"ID":"c0ab1cb8-dc08-4f72-a765-083f2a511a7e","Type":"ContainerStarted","Data":"5569ca78fd49a0ab2d22d3c15c4e29ed104f29d7a18d5ef53dce7fddd9af6896"} Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.610551 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6854899c48-wx94v" event={"ID":"c0ab1cb8-dc08-4f72-a765-083f2a511a7e","Type":"ContainerStarted","Data":"bd1537d0a18ffbf93abbfed297fc88ac8b764a746f51d33016ffc69a4d7c0bc5"} Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.613886 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-864fcd46f6-bn7r2" event={"ID":"a084e5b1-d167-4678-8ab9-af72fb1d07fd","Type":"ContainerStarted","Data":"9982d06a3f9e319a6ac98d0397be8271cb4490d37b4f3f2be7d30bd0f946c97e"} Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.614248 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-864fcd46f6-bn7r2" Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.614361 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-864fcd46f6-bn7r2" Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.623806 4766 generic.go:334] "Generic (PLEG): container finished" podID="07b49130-ef93-4a8f-8830-ab2539302987" containerID="b9f2fdd05cf99e938cb47082363813a173212174744fc293bbec5e80f3b35b4b" exitCode=0 Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.623836 4766 generic.go:334] "Generic (PLEG): container finished" podID="07b49130-ef93-4a8f-8830-ab2539302987" containerID="5fa425b33c0943d95638cb1f22947ab54527f1816e388476e032b3fda7b99d0d" exitCode=2 Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.623843 4766 generic.go:334] "Generic (PLEG): container finished" podID="07b49130-ef93-4a8f-8830-ab2539302987" containerID="bac8d4ac39c784ef6ed6f83216dae502912b3dc3e8de386318caa719ac58d78b" exitCode=0 Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.623883 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07b49130-ef93-4a8f-8830-ab2539302987","Type":"ContainerDied","Data":"b9f2fdd05cf99e938cb47082363813a173212174744fc293bbec5e80f3b35b4b"} Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.623907 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07b49130-ef93-4a8f-8830-ab2539302987","Type":"ContainerDied","Data":"5fa425b33c0943d95638cb1f22947ab54527f1816e388476e032b3fda7b99d0d"} Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.623917 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07b49130-ef93-4a8f-8830-ab2539302987","Type":"ContainerDied","Data":"bac8d4ac39c784ef6ed6f83216dae502912b3dc3e8de386318caa719ac58d78b"} Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 
11:47:00.626403 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fb8b49db-d28l6" event={"ID":"ca25e412-9c10-45d3-84b4-a8f059ddcfbc","Type":"ContainerStarted","Data":"65409ade89ea8631310b9303ff5f40d6637a80413e8f22e6d6c33acc1695a06e"} Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.627145 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7fb8b49db-d28l6" Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.627174 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7fb8b49db-d28l6" Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.631445 4766 generic.go:334] "Generic (PLEG): container finished" podID="4c8b3024-3e34-488a-8cea-ec3ee57fda99" containerID="b2565533fc97d56cca8b0208c040bb47fdbe135ae4090f3faf34f8876d98061e" exitCode=0 Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.631522 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd" event={"ID":"4c8b3024-3e34-488a-8cea-ec3ee57fda99","Type":"ContainerDied","Data":"b2565533fc97d56cca8b0208c040bb47fdbe135ae4090f3faf34f8876d98061e"} Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.631738 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd" event={"ID":"4c8b3024-3e34-488a-8cea-ec3ee57fda99","Type":"ContainerStarted","Data":"2543df0c60aee6a3f5673e8726128d2ee28292bf13d068f116bb06773f22a7a6"} Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.634447 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-wc899" podStartSLOduration=3.911022853 podStartE2EDuration="53.634433157s" podCreationTimestamp="2026-01-29 11:46:07 +0000 UTC" firstStartedPulling="2026-01-29 11:46:09.184616984 +0000 UTC m=+1506.297009995" lastFinishedPulling="2026-01-29 11:46:58.908027288 +0000 UTC m=+1556.020420299" observedRunningTime="2026-01-29 11:47:00.630086577 +0000 UTC m=+1557.742479608" watchObservedRunningTime="2026-01-29 11:47:00.634433157 +0000 UTC m=+1557.746826168" Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.653672 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-864fcd46f6-bn7r2" podStartSLOduration=4.65365461 podStartE2EDuration="4.65365461s" podCreationTimestamp="2026-01-29 11:46:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:47:00.646395239 +0000 UTC m=+1557.758788270" watchObservedRunningTime="2026-01-29 11:47:00.65365461 +0000 UTC m=+1557.766047621" Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.665511 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7fb8b49db-d28l6" podStartSLOduration=8.665487749 podStartE2EDuration="8.665487749s" podCreationTimestamp="2026-01-29 11:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:47:00.663960286 +0000 UTC m=+1557.776353307" watchObservedRunningTime="2026-01-29 11:47:00.665487749 +0000 UTC m=+1557.777880760" Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.728027 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-q9947"] Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.737455 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-7c67bffd47-q9947"] Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.883041 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8d49f9cb5-5nhnk"] Jan 29 11:47:00 crc kubenswrapper[4766]: E0129 11:47:00.883494 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efcaac53-fcf9-47d7-bc27-3246517249ea" containerName="init" Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.883510 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="efcaac53-fcf9-47d7-bc27-3246517249ea" containerName="init" Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.883746 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="efcaac53-fcf9-47d7-bc27-3246517249ea" containerName="init" Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.885438 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8d49f9cb5-5nhnk" Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.894140 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.894293 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 29 11:47:00 crc kubenswrapper[4766]: I0129 11:47:00.979969 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8d49f9cb5-5nhnk"] Jan 29 11:47:01 crc kubenswrapper[4766]: I0129 11:47:01.025453 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6t2f\" (UniqueName: \"kubernetes.io/projected/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-kube-api-access-c6t2f\") pod \"neutron-8d49f9cb5-5nhnk\" (UID: \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\") " pod="openstack/neutron-8d49f9cb5-5nhnk" Jan 29 11:47:01 crc kubenswrapper[4766]: I0129 11:47:01.025524 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-internal-tls-certs\") pod \"neutron-8d49f9cb5-5nhnk\" (UID: \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\") " pod="openstack/neutron-8d49f9cb5-5nhnk" Jan 29 11:47:01 crc kubenswrapper[4766]: I0129 11:47:01.025572 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-httpd-config\") pod \"neutron-8d49f9cb5-5nhnk\" (UID: \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\") " pod="openstack/neutron-8d49f9cb5-5nhnk" Jan 29 11:47:01 crc kubenswrapper[4766]: I0129 11:47:01.025611 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-public-tls-certs\") pod \"neutron-8d49f9cb5-5nhnk\" (UID: \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\") " pod="openstack/neutron-8d49f9cb5-5nhnk" Jan 29 11:47:01 crc kubenswrapper[4766]: I0129 11:47:01.025655 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-ovndb-tls-certs\") pod \"neutron-8d49f9cb5-5nhnk\" (UID: \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\") " pod="openstack/neutron-8d49f9cb5-5nhnk" Jan 29 11:47:01 crc kubenswrapper[4766]: I0129 11:47:01.025724 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-config\") pod \"neutron-8d49f9cb5-5nhnk\" (UID: \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\") " pod="openstack/neutron-8d49f9cb5-5nhnk" Jan 29 11:47:01 crc kubenswrapper[4766]: I0129 11:47:01.025758 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-combined-ca-bundle\") pod \"neutron-8d49f9cb5-5nhnk\" (UID: \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\") " pod="openstack/neutron-8d49f9cb5-5nhnk" Jan 29 11:47:01 crc kubenswrapper[4766]: I0129 11:47:01.127184 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-combined-ca-bundle\") pod \"neutron-8d49f9cb5-5nhnk\" (UID: \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\") " pod="openstack/neutron-8d49f9cb5-5nhnk" Jan 29 11:47:01 crc kubenswrapper[4766]: I0129 11:47:01.127254 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6t2f\" (UniqueName: \"kubernetes.io/projected/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-kube-api-access-c6t2f\") pod \"neutron-8d49f9cb5-5nhnk\" (UID: \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\") " pod="openstack/neutron-8d49f9cb5-5nhnk" Jan 29 11:47:01 crc kubenswrapper[4766]: I0129 11:47:01.127285 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-internal-tls-certs\") pod \"neutron-8d49f9cb5-5nhnk\" (UID: \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\") " pod="openstack/neutron-8d49f9cb5-5nhnk" Jan 29 11:47:01 crc kubenswrapper[4766]: I0129 11:47:01.127304 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-httpd-config\") pod \"neutron-8d49f9cb5-5nhnk\" (UID: \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\") " pod="openstack/neutron-8d49f9cb5-5nhnk" Jan 29 11:47:01 crc kubenswrapper[4766]: I0129 11:47:01.127335 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-public-tls-certs\") pod \"neutron-8d49f9cb5-5nhnk\" (UID: \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\") " pod="openstack/neutron-8d49f9cb5-5nhnk" Jan 29 11:47:01 crc kubenswrapper[4766]: I0129 11:47:01.127373 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-ovndb-tls-certs\") pod \"neutron-8d49f9cb5-5nhnk\" (UID: \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\") " pod="openstack/neutron-8d49f9cb5-5nhnk" Jan 29 11:47:01 crc kubenswrapper[4766]: I0129 11:47:01.127435 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-config\") pod \"neutron-8d49f9cb5-5nhnk\" (UID: \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\") " pod="openstack/neutron-8d49f9cb5-5nhnk" Jan 29 11:47:01 crc kubenswrapper[4766]: I0129 11:47:01.133097 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-internal-tls-certs\") pod \"neutron-8d49f9cb5-5nhnk\" (UID: \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\") " pod="openstack/neutron-8d49f9cb5-5nhnk" Jan 29 11:47:01 crc kubenswrapper[4766]: I0129 11:47:01.136852 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-ovndb-tls-certs\") pod \"neutron-8d49f9cb5-5nhnk\" (UID: \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\") " pod="openstack/neutron-8d49f9cb5-5nhnk" Jan 29 11:47:01 crc kubenswrapper[4766]: I0129 11:47:01.137715 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-config\") pod \"neutron-8d49f9cb5-5nhnk\" (UID: \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\") " pod="openstack/neutron-8d49f9cb5-5nhnk" Jan 29 11:47:01 crc kubenswrapper[4766]: I0129 11:47:01.141024 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-httpd-config\") pod \"neutron-8d49f9cb5-5nhnk\" (UID: \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\") " pod="openstack/neutron-8d49f9cb5-5nhnk" Jan 29 11:47:01 crc kubenswrapper[4766]: I0129 11:47:01.141594 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-public-tls-certs\") pod \"neutron-8d49f9cb5-5nhnk\" (UID: \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\") " pod="openstack/neutron-8d49f9cb5-5nhnk" Jan 29 11:47:01 crc kubenswrapper[4766]: I0129 11:47:01.144597 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6t2f\" (UniqueName: \"kubernetes.io/projected/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-kube-api-access-c6t2f\") pod \"neutron-8d49f9cb5-5nhnk\" (UID: \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\") " pod="openstack/neutron-8d49f9cb5-5nhnk" Jan 29 11:47:01 crc kubenswrapper[4766]: I0129 11:47:01.146447 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-combined-ca-bundle\") pod \"neutron-8d49f9cb5-5nhnk\" (UID: \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\") " pod="openstack/neutron-8d49f9cb5-5nhnk" Jan 29 11:47:01 crc kubenswrapper[4766]: I0129 11:47:01.245034 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efcaac53-fcf9-47d7-bc27-3246517249ea" path="/var/lib/kubelet/pods/efcaac53-fcf9-47d7-bc27-3246517249ea/volumes" Jan 29 11:47:01 crc kubenswrapper[4766]: I0129 11:47:01.255658 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8d49f9cb5-5nhnk" Jan 29 11:47:01 crc kubenswrapper[4766]: I0129 11:47:01.643029 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6854899c48-wx94v" event={"ID":"c0ab1cb8-dc08-4f72-a765-083f2a511a7e","Type":"ContainerStarted","Data":"39e0f3e0ffe14a10427ef4dfc0519bb7bc13b268ccc6da302855ea96686846e9"} Jan 29 11:47:01 crc kubenswrapper[4766]: I0129 11:47:01.643156 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6854899c48-wx94v" Jan 29 11:47:01 crc kubenswrapper[4766]: I0129 11:47:01.647010 4766 generic.go:334] "Generic (PLEG): container finished" podID="07b49130-ef93-4a8f-8830-ab2539302987" containerID="f1555c94e60831cbe9a24c08703cdca7c454c3fa69bbc3886a5927a37ce9f330" exitCode=0 Jan 29 11:47:01 crc kubenswrapper[4766]: I0129 11:47:01.647089 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07b49130-ef93-4a8f-8830-ab2539302987","Type":"ContainerDied","Data":"f1555c94e60831cbe9a24c08703cdca7c454c3fa69bbc3886a5927a37ce9f330"} Jan 29 11:47:01 crc kubenswrapper[4766]: I0129 11:47:01.672686 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6854899c48-wx94v" podStartSLOduration=5.672668552 podStartE2EDuration="5.672668552s" podCreationTimestamp="2026-01-29 11:46:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:47:01.67187798 +0000 UTC m=+1558.784270991" watchObservedRunningTime="2026-01-29 11:47:01.672668552 +0000 UTC m=+1558.785061563" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.194141 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.357360 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5wwp\" (UniqueName: \"kubernetes.io/projected/07b49130-ef93-4a8f-8830-ab2539302987-kube-api-access-z5wwp\") pod \"07b49130-ef93-4a8f-8830-ab2539302987\" (UID: \"07b49130-ef93-4a8f-8830-ab2539302987\") " Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.357725 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b49130-ef93-4a8f-8830-ab2539302987-config-data\") pod \"07b49130-ef93-4a8f-8830-ab2539302987\" (UID: \"07b49130-ef93-4a8f-8830-ab2539302987\") " Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.357818 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b49130-ef93-4a8f-8830-ab2539302987-scripts\") pod \"07b49130-ef93-4a8f-8830-ab2539302987\" (UID: \"07b49130-ef93-4a8f-8830-ab2539302987\") " Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.357925 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07b49130-ef93-4a8f-8830-ab2539302987-run-httpd\") pod \"07b49130-ef93-4a8f-8830-ab2539302987\" (UID: \"07b49130-ef93-4a8f-8830-ab2539302987\") " Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.358015 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07b49130-ef93-4a8f-8830-ab2539302987-log-httpd\") pod \"07b49130-ef93-4a8f-8830-ab2539302987\" (UID: \"07b49130-ef93-4a8f-8830-ab2539302987\") " Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.358048 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07b49130-ef93-4a8f-8830-ab2539302987-sg-core-conf-yaml\") pod \"07b49130-ef93-4a8f-8830-ab2539302987\" (UID: \"07b49130-ef93-4a8f-8830-ab2539302987\") " Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.358923 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07b49130-ef93-4a8f-8830-ab2539302987-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "07b49130-ef93-4a8f-8830-ab2539302987" (UID: "07b49130-ef93-4a8f-8830-ab2539302987"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.358950 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07b49130-ef93-4a8f-8830-ab2539302987-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "07b49130-ef93-4a8f-8830-ab2539302987" (UID: "07b49130-ef93-4a8f-8830-ab2539302987"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.358995 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b49130-ef93-4a8f-8830-ab2539302987-combined-ca-bundle\") pod \"07b49130-ef93-4a8f-8830-ab2539302987\" (UID: \"07b49130-ef93-4a8f-8830-ab2539302987\") " Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.360057 4766 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07b49130-ef93-4a8f-8830-ab2539302987-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.360086 4766 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07b49130-ef93-4a8f-8830-ab2539302987-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.363676 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07b49130-ef93-4a8f-8830-ab2539302987-kube-api-access-z5wwp" (OuterVolumeSpecName: "kube-api-access-z5wwp") pod "07b49130-ef93-4a8f-8830-ab2539302987" (UID: "07b49130-ef93-4a8f-8830-ab2539302987"). InnerVolumeSpecName "kube-api-access-z5wwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.372609 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b49130-ef93-4a8f-8830-ab2539302987-scripts" (OuterVolumeSpecName: "scripts") pod "07b49130-ef93-4a8f-8830-ab2539302987" (UID: "07b49130-ef93-4a8f-8830-ab2539302987"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.392315 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8d49f9cb5-5nhnk"] Jan 29 11:47:02 crc kubenswrapper[4766]: W0129 11:47:02.394679 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd5d6aa7_be8d_4439_a4d3_70272705cc2f.slice/crio-4bd51f8fc6cbb5c97ab7a620778c60102dbdb8107a6b5356f9f3716bb900c2a3 WatchSource:0}: Error finding container 4bd51f8fc6cbb5c97ab7a620778c60102dbdb8107a6b5356f9f3716bb900c2a3: Status 404 returned error can't find the container with id 4bd51f8fc6cbb5c97ab7a620778c60102dbdb8107a6b5356f9f3716bb900c2a3 Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.398093 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b49130-ef93-4a8f-8830-ab2539302987-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "07b49130-ef93-4a8f-8830-ab2539302987" (UID: "07b49130-ef93-4a8f-8830-ab2539302987"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.462033 4766 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07b49130-ef93-4a8f-8830-ab2539302987-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.462067 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5wwp\" (UniqueName: \"kubernetes.io/projected/07b49130-ef93-4a8f-8830-ab2539302987-kube-api-access-z5wwp\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.462079 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b49130-ef93-4a8f-8830-ab2539302987-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.462509 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b49130-ef93-4a8f-8830-ab2539302987-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07b49130-ef93-4a8f-8830-ab2539302987" (UID: "07b49130-ef93-4a8f-8830-ab2539302987"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.507226 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b49130-ef93-4a8f-8830-ab2539302987-config-data" (OuterVolumeSpecName: "config-data") pod "07b49130-ef93-4a8f-8830-ab2539302987" (UID: "07b49130-ef93-4a8f-8830-ab2539302987"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.563489 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b49130-ef93-4a8f-8830-ab2539302987-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.563514 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b49130-ef93-4a8f-8830-ab2539302987-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.662102 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8d49f9cb5-5nhnk" event={"ID":"dd5d6aa7-be8d-4439-a4d3-70272705cc2f","Type":"ContainerStarted","Data":"a3261857a975d8ba13b382b3c93311ea52ddb25065b1874aaa59d00eb75e61a5"} Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.662472 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8d49f9cb5-5nhnk" event={"ID":"dd5d6aa7-be8d-4439-a4d3-70272705cc2f","Type":"ContainerStarted","Data":"4bd51f8fc6cbb5c97ab7a620778c60102dbdb8107a6b5356f9f3716bb900c2a3"} Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.664069 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-67f655d9dc-95fxw" event={"ID":"e238ce2e-9a21-43c5-94c2-0a31ab078c79","Type":"ContainerStarted","Data":"679c7206ac2f82b82e8b1a3ca3a64bf5f1d0710a5dba85f183e20c4390695423"} Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.665220 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2" event={"ID":"15805cd2-3301-4e59-8c66-adde53408809","Type":"ContainerStarted","Data":"d3a5a4ab1f26a3b0ec0c993790441804f0c92c85eb73ffb26bede23ff956c81f"} Jan 29 11:47:02 crc 
Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.668840 4766 scope.go:117] "RemoveContainer" containerID="b9f2fdd05cf99e938cb47082363813a173212174744fc293bbec5e80f3b35b4b"
Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.668947 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.679885 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd" event={"ID":"4c8b3024-3e34-488a-8cea-ec3ee57fda99","Type":"ContainerStarted","Data":"ecbdaa68778e5b026bb15de99d381a247817155af659c96860c67bd842555592"}
Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.679933 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd"
Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.710946 4766 scope.go:117] "RemoveContainer" containerID="5fa425b33c0943d95638cb1f22947ab54527f1816e388476e032b3fda7b99d0d"
Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.739555 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd" podStartSLOduration=6.73953348 podStartE2EDuration="6.73953348s" podCreationTimestamp="2026-01-29 11:46:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:47:02.710521165 +0000 UTC m=+1559.822914196" watchObservedRunningTime="2026-01-29 11:47:02.73953348 +0000 UTC m=+1559.851926501"
Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.742723 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.751111 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.764987 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 29 11:47:02 crc kubenswrapper[4766]: E0129 11:47:02.767191 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b49130-ef93-4a8f-8830-ab2539302987" containerName="ceilometer-notification-agent"
Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.767277 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b49130-ef93-4a8f-8830-ab2539302987" containerName="ceilometer-notification-agent"
Jan 29 11:47:02 crc kubenswrapper[4766]: E0129 11:47:02.767330 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b49130-ef93-4a8f-8830-ab2539302987" containerName="ceilometer-central-agent"
Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.767374 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b49130-ef93-4a8f-8830-ab2539302987" containerName="ceilometer-central-agent"
Jan 29 11:47:02 crc kubenswrapper[4766]: E0129 11:47:02.767476 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b49130-ef93-4a8f-8830-ab2539302987" containerName="proxy-httpd"
Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.767528 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b49130-ef93-4a8f-8830-ab2539302987" containerName="proxy-httpd"
Jan 29 11:47:02 crc kubenswrapper[4766]: E0129 11:47:02.767591 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b49130-ef93-4a8f-8830-ab2539302987" containerName="sg-core"
Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.767654 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b49130-ef93-4a8f-8830-ab2539302987" containerName="sg-core"
Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.767915 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="07b49130-ef93-4a8f-8830-ab2539302987" containerName="sg-core"
Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.767979 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="07b49130-ef93-4a8f-8830-ab2539302987" containerName="proxy-httpd"
Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.768049 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="07b49130-ef93-4a8f-8830-ab2539302987" containerName="ceilometer-notification-agent"
Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.768102 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="07b49130-ef93-4a8f-8830-ab2539302987" containerName="ceilometer-central-agent"
Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.769860 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.774274 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.774797 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.785704 4766 scope.go:117] "RemoveContainer" containerID="f1555c94e60831cbe9a24c08703cdca7c454c3fa69bbc3886a5927a37ce9f330"
Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.804902 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.856990 4766 scope.go:117] "RemoveContainer" containerID="bac8d4ac39c784ef6ed6f83216dae502912b3dc3e8de386318caa719ac58d78b"
Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.867735 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\") " pod="openstack/ceilometer-0"
Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.867790 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-log-httpd\") pod \"ceilometer-0\" (UID: \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\") " pod="openstack/ceilometer-0"
Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.867816 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-run-httpd\") pod \"ceilometer-0\" (UID: \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\") " pod="openstack/ceilometer-0"
Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.867837 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-config-data\") pod \"ceilometer-0\" (UID: \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\") " pod="openstack/ceilometer-0"
\"kubernetes.io/secret/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-config-data\") pod \"ceilometer-0\" (UID: \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\") " pod="openstack/ceilometer-0" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.867856 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\") " pod="openstack/ceilometer-0" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.868118 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-scripts\") pod \"ceilometer-0\" (UID: \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\") " pod="openstack/ceilometer-0" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.868168 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slm47\" (UniqueName: \"kubernetes.io/projected/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-kube-api-access-slm47\") pod \"ceilometer-0\" (UID: \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\") " pod="openstack/ceilometer-0" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.969908 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-scripts\") pod \"ceilometer-0\" (UID: \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\") " pod="openstack/ceilometer-0" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.969974 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slm47\" (UniqueName: \"kubernetes.io/projected/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-kube-api-access-slm47\") pod \"ceilometer-0\" (UID: \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\") " pod="openstack/ceilometer-0" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.970061 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\") " pod="openstack/ceilometer-0" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.970104 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-log-httpd\") pod \"ceilometer-0\" (UID: \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\") " pod="openstack/ceilometer-0" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.970134 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-run-httpd\") pod \"ceilometer-0\" (UID: \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\") " pod="openstack/ceilometer-0" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.970161 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-config-data\") pod \"ceilometer-0\" (UID: \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\") " pod="openstack/ceilometer-0" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.970187 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\") " pod="openstack/ceilometer-0" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.971443 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-log-httpd\") pod \"ceilometer-0\" (UID: \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\") " pod="openstack/ceilometer-0" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.972015 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-run-httpd\") pod \"ceilometer-0\" (UID: \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\") " pod="openstack/ceilometer-0" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.975765 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\") " pod="openstack/ceilometer-0" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.976441 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-config-data\") pod \"ceilometer-0\" (UID: \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\") " pod="openstack/ceilometer-0" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.977341 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-scripts\") pod \"ceilometer-0\" (UID: \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\") " pod="openstack/ceilometer-0" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.977529 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\") " pod="openstack/ceilometer-0" Jan 29 11:47:02 crc kubenswrapper[4766]: I0129 11:47:02.993259 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slm47\" (UniqueName: \"kubernetes.io/projected/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-kube-api-access-slm47\") pod \"ceilometer-0\" (UID: \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\") " pod="openstack/ceilometer-0" Jan 29 11:47:03 crc kubenswrapper[4766]: I0129 11:47:03.102452 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:47:03 crc kubenswrapper[4766]: I0129 11:47:03.242083 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07b49130-ef93-4a8f-8830-ab2539302987" path="/var/lib/kubelet/pods/07b49130-ef93-4a8f-8830-ab2539302987/volumes" Jan 29 11:47:03 crc kubenswrapper[4766]: I0129 11:47:03.545205 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:47:03 crc kubenswrapper[4766]: I0129 11:47:03.689565 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619","Type":"ContainerStarted","Data":"bd0bd2bd35b4da5af9db66a954d6f6dced9e75b974f30758607f551e695f9337"} Jan 29 11:47:03 crc kubenswrapper[4766]: I0129 11:47:03.694622 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2" event={"ID":"15805cd2-3301-4e59-8c66-adde53408809","Type":"ContainerStarted","Data":"a1a9a79ccf506d864099d855f636208585fac69df49e6476e65b408773389289"} Jan 29 11:47:03 crc kubenswrapper[4766]: I0129 11:47:03.709059 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8d49f9cb5-5nhnk" event={"ID":"dd5d6aa7-be8d-4439-a4d3-70272705cc2f","Type":"ContainerStarted","Data":"d8dbef2524f2542af763a7cb33a1638c422019cf0cf86edf0a6139eede756496"} Jan 29 11:47:03 crc kubenswrapper[4766]: I0129 11:47:03.709429 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8d49f9cb5-5nhnk" Jan 29 11:47:03 crc kubenswrapper[4766]: I0129 11:47:03.715371 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2" podStartSLOduration=8.097581308 podStartE2EDuration="11.715353532s" podCreationTimestamp="2026-01-29 11:46:52 +0000 UTC" firstStartedPulling="2026-01-29 11:46:58.183480654 +0000 UTC m=+1555.295873675" lastFinishedPulling="2026-01-29 11:47:01.801252888 +0000 UTC m=+1558.913645899" observedRunningTime="2026-01-29 11:47:03.711904357 +0000 UTC m=+1560.824297378" watchObservedRunningTime="2026-01-29 11:47:03.715353532 +0000 UTC m=+1560.827746533" Jan 29 11:47:03 crc kubenswrapper[4766]: I0129 11:47:03.716186 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-67f655d9dc-95fxw" event={"ID":"e238ce2e-9a21-43c5-94c2-0a31ab078c79","Type":"ContainerStarted","Data":"a5c49449e84d148200b6f0a47a8ec23b2f77e9135152810c5d0bbabc622713e8"} Jan 29 11:47:03 crc kubenswrapper[4766]: I0129 11:47:03.742146 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8d49f9cb5-5nhnk" podStartSLOduration=3.7421259449999997 podStartE2EDuration="3.742125945s" podCreationTimestamp="2026-01-29 11:47:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:47:03.73691495 +0000 UTC m=+1560.849307961" watchObservedRunningTime="2026-01-29 11:47:03.742125945 +0000 UTC m=+1560.854518956" Jan 29 11:47:03 crc kubenswrapper[4766]: I0129 11:47:03.762115 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-67f655d9dc-95fxw" podStartSLOduration=8.134690208 podStartE2EDuration="11.762094889s" podCreationTimestamp="2026-01-29 11:46:52 +0000 UTC" firstStartedPulling="2026-01-29 11:46:58.174654919 +0000 UTC m=+1555.287047930" lastFinishedPulling="2026-01-29 11:47:01.8020596 +0000 UTC 
m=+1558.914452611" observedRunningTime="2026-01-29 11:47:03.755691731 +0000 UTC m=+1560.868084752" watchObservedRunningTime="2026-01-29 11:47:03.762094889 +0000 UTC m=+1560.874487900" Jan 29 11:47:04 crc kubenswrapper[4766]: I0129 11:47:04.727614 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619","Type":"ContainerStarted","Data":"ac1cc8319719cbef03c5a82f3a7ef72fb4e425fac3df50f060722739c6183ff7"} Jan 29 11:47:05 crc kubenswrapper[4766]: I0129 11:47:05.736287 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619","Type":"ContainerStarted","Data":"644a1eed80c6bd834a2d1d821fe616dd82c719bf8607d75bf46fe7d75bdf3811"} Jan 29 11:47:05 crc kubenswrapper[4766]: I0129 11:47:05.739093 4766 generic.go:334] "Generic (PLEG): container finished" podID="e2d775c8-398d-45dd-aea7-2c2bc050e040" containerID="1a5abfa52d446485a846754eb67676b987aa6b3104b0f18b430343686110ea02" exitCode=0 Jan 29 11:47:05 crc kubenswrapper[4766]: I0129 11:47:05.739134 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wc899" event={"ID":"e2d775c8-398d-45dd-aea7-2c2bc050e040","Type":"ContainerDied","Data":"1a5abfa52d446485a846754eb67676b987aa6b3104b0f18b430343686110ea02"} Jan 29 11:47:06 crc kubenswrapper[4766]: I0129 11:47:06.753435 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619","Type":"ContainerStarted","Data":"719ed8a40266897095fd4aac44082047f26bfc965df4c832be6919779bd55106"} Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.218588 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wc899" Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.265367 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d775c8-398d-45dd-aea7-2c2bc050e040-config-data\") pod \"e2d775c8-398d-45dd-aea7-2c2bc050e040\" (UID: \"e2d775c8-398d-45dd-aea7-2c2bc050e040\") " Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.265501 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgstc\" (UniqueName: \"kubernetes.io/projected/e2d775c8-398d-45dd-aea7-2c2bc050e040-kube-api-access-fgstc\") pod \"e2d775c8-398d-45dd-aea7-2c2bc050e040\" (UID: \"e2d775c8-398d-45dd-aea7-2c2bc050e040\") " Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.265601 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e2d775c8-398d-45dd-aea7-2c2bc050e040-db-sync-config-data\") pod \"e2d775c8-398d-45dd-aea7-2c2bc050e040\" (UID: \"e2d775c8-398d-45dd-aea7-2c2bc050e040\") " Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.265638 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2d775c8-398d-45dd-aea7-2c2bc050e040-scripts\") pod \"e2d775c8-398d-45dd-aea7-2c2bc050e040\" (UID: \"e2d775c8-398d-45dd-aea7-2c2bc050e040\") " Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.265662 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d775c8-398d-45dd-aea7-2c2bc050e040-combined-ca-bundle\") pod \"e2d775c8-398d-45dd-aea7-2c2bc050e040\" (UID: 
\"e2d775c8-398d-45dd-aea7-2c2bc050e040\") " Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.265696 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2d775c8-398d-45dd-aea7-2c2bc050e040-etc-machine-id\") pod \"e2d775c8-398d-45dd-aea7-2c2bc050e040\" (UID: \"e2d775c8-398d-45dd-aea7-2c2bc050e040\") " Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.266106 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2d775c8-398d-45dd-aea7-2c2bc050e040-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e2d775c8-398d-45dd-aea7-2c2bc050e040" (UID: "e2d775c8-398d-45dd-aea7-2c2bc050e040"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.271856 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd" Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.274506 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d775c8-398d-45dd-aea7-2c2bc050e040-scripts" (OuterVolumeSpecName: "scripts") pod "e2d775c8-398d-45dd-aea7-2c2bc050e040" (UID: "e2d775c8-398d-45dd-aea7-2c2bc050e040"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.279546 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d775c8-398d-45dd-aea7-2c2bc050e040-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e2d775c8-398d-45dd-aea7-2c2bc050e040" (UID: "e2d775c8-398d-45dd-aea7-2c2bc050e040"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.289013 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2d775c8-398d-45dd-aea7-2c2bc050e040-kube-api-access-fgstc" (OuterVolumeSpecName: "kube-api-access-fgstc") pod "e2d775c8-398d-45dd-aea7-2c2bc050e040" (UID: "e2d775c8-398d-45dd-aea7-2c2bc050e040"). InnerVolumeSpecName "kube-api-access-fgstc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.317179 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d775c8-398d-45dd-aea7-2c2bc050e040-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2d775c8-398d-45dd-aea7-2c2bc050e040" (UID: "e2d775c8-398d-45dd-aea7-2c2bc050e040"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.356202 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-hldhd"] Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.356526 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" podUID="e21d6c49-ac47-47dc-9515-2ff0e5e04f31" containerName="dnsmasq-dns" containerID="cri-o://cbecd2dd6f5c9fffbc21fc983d684baddd2a90e8023527a627160d7c5f7ee6e2" gracePeriod=10 Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.367945 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgstc\" (UniqueName: \"kubernetes.io/projected/e2d775c8-398d-45dd-aea7-2c2bc050e040-kube-api-access-fgstc\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.367977 4766 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e2d775c8-398d-45dd-aea7-2c2bc050e040-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.367990 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2d775c8-398d-45dd-aea7-2c2bc050e040-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.368002 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d775c8-398d-45dd-aea7-2c2bc050e040-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.368014 4766 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2d775c8-398d-45dd-aea7-2c2bc050e040-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.487521 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d775c8-398d-45dd-aea7-2c2bc050e040-config-data" (OuterVolumeSpecName: "config-data") pod "e2d775c8-398d-45dd-aea7-2c2bc050e040" (UID: "e2d775c8-398d-45dd-aea7-2c2bc050e040"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.573738 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d775c8-398d-45dd-aea7-2c2bc050e040-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.771641 4766 generic.go:334] "Generic (PLEG): container finished" podID="e21d6c49-ac47-47dc-9515-2ff0e5e04f31" containerID="cbecd2dd6f5c9fffbc21fc983d684baddd2a90e8023527a627160d7c5f7ee6e2" exitCode=0 Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.772170 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" event={"ID":"e21d6c49-ac47-47dc-9515-2ff0e5e04f31","Type":"ContainerDied","Data":"cbecd2dd6f5c9fffbc21fc983d684baddd2a90e8023527a627160d7c5f7ee6e2"} Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.799252 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619","Type":"ContainerStarted","Data":"58d98541d32a0de2879b147c35e937a70f935dff22781671baa8a4bfe2955a39"} Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.800960 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.803168 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wc899" event={"ID":"e2d775c8-398d-45dd-aea7-2c2bc050e040","Type":"ContainerDied","Data":"3fe5b611a3c0a15393a1c08ec858871335b27854a53e378458f0176bbfbc3cae"} Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.803218 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fe5b611a3c0a15393a1c08ec858871335b27854a53e378458f0176bbfbc3cae" Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.803294 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wc899" Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.876135 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.912750 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.035763228 podStartE2EDuration="5.912727201s" podCreationTimestamp="2026-01-29 11:47:02 +0000 UTC" firstStartedPulling="2026-01-29 11:47:03.566640308 +0000 UTC m=+1560.679033319" lastFinishedPulling="2026-01-29 11:47:07.443604281 +0000 UTC m=+1564.555997292" observedRunningTime="2026-01-29 11:47:07.845465446 +0000 UTC m=+1564.957858457" watchObservedRunningTime="2026-01-29 11:47:07.912727201 +0000 UTC m=+1565.025120222" Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.997468 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-config\") pod \"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\" (UID: \"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\") " Jan 29 11:47:07 crc kubenswrapper[4766]: I0129 11:47:07.997597 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-dns-swift-storage-0\") pod \"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\" (UID: \"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\") " Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.000490 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-ovsdbserver-nb\") pod \"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\" (UID: \"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\") " Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.000607 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhq4n\" (UniqueName: \"kubernetes.io/projected/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-kube-api-access-zhq4n\") pod \"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\" (UID: \"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\") " Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.000676 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-ovsdbserver-sb\") pod \"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\" (UID: \"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\") " Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.000749 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-dns-svc\") pod \"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\" (UID: \"e21d6c49-ac47-47dc-9515-2ff0e5e04f31\") " Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.035309 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-kube-api-access-zhq4n" (OuterVolumeSpecName: "kube-api-access-zhq4n") pod "e21d6c49-ac47-47dc-9515-2ff0e5e04f31" (UID: "e21d6c49-ac47-47dc-9515-2ff0e5e04f31"). InnerVolumeSpecName "kube-api-access-zhq4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.102196 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 11:47:08 crc kubenswrapper[4766]: E0129 11:47:08.106909 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d775c8-398d-45dd-aea7-2c2bc050e040" containerName="cinder-db-sync" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.107499 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d775c8-398d-45dd-aea7-2c2bc050e040" containerName="cinder-db-sync" Jan 29 11:47:08 crc kubenswrapper[4766]: E0129 11:47:08.107611 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e21d6c49-ac47-47dc-9515-2ff0e5e04f31" containerName="dnsmasq-dns" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.107700 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e21d6c49-ac47-47dc-9515-2ff0e5e04f31" containerName="dnsmasq-dns" Jan 29 11:47:08 crc kubenswrapper[4766]: E0129 11:47:08.107794 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e21d6c49-ac47-47dc-9515-2ff0e5e04f31" containerName="init" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.107861 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e21d6c49-ac47-47dc-9515-2ff0e5e04f31" containerName="init" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.108109 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d775c8-398d-45dd-aea7-2c2bc050e040" containerName="cinder-db-sync" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.108325 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e21d6c49-ac47-47dc-9515-2ff0e5e04f31" containerName="dnsmasq-dns" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.111686 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.103609 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhq4n\" (UniqueName: \"kubernetes.io/projected/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-kube-api-access-zhq4n\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.122930 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.125957 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.126564 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-cd22x" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.126800 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.142037 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-jsqcd"] Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.144379 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-jsqcd" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.167890 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e21d6c49-ac47-47dc-9515-2ff0e5e04f31" (UID: "e21d6c49-ac47-47dc-9515-2ff0e5e04f31"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.197597 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-jsqcd"] Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.202491 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e21d6c49-ac47-47dc-9515-2ff0e5e04f31" (UID: "e21d6c49-ac47-47dc-9515-2ff0e5e04f31"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.230449 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d07c99d-fe00-4217-8d7a-2f848e825bf1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.230503 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d07c99d-fe00-4217-8d7a-2f848e825bf1-config-data\") pod \"cinder-scheduler-0\" (UID: \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.230528 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbsz8\" (UniqueName: \"kubernetes.io/projected/4d07c99d-fe00-4217-8d7a-2f848e825bf1-kube-api-access-tbsz8\") pod \"cinder-scheduler-0\" (UID: \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.230572 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw62d\" (UniqueName: \"kubernetes.io/projected/9037dd54-3cca-491b-9f1d-27393d6ec544-kube-api-access-qw62d\") pod \"dnsmasq-dns-6578955fd5-jsqcd\" (UID: \"9037dd54-3cca-491b-9f1d-27393d6ec544\") " pod="openstack/dnsmasq-dns-6578955fd5-jsqcd" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.230603 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-jsqcd\" (UID: \"9037dd54-3cca-491b-9f1d-27393d6ec544\") " pod="openstack/dnsmasq-dns-6578955fd5-jsqcd" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.230644 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-jsqcd\" (UID: \"9037dd54-3cca-491b-9f1d-27393d6ec544\") " pod="openstack/dnsmasq-dns-6578955fd5-jsqcd" Jan 
29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.230697 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d07c99d-fe00-4217-8d7a-2f848e825bf1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.230717 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d07c99d-fe00-4217-8d7a-2f848e825bf1-scripts\") pod \"cinder-scheduler-0\" (UID: \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.230788 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-config\") pod \"dnsmasq-dns-6578955fd5-jsqcd\" (UID: \"9037dd54-3cca-491b-9f1d-27393d6ec544\") " pod="openstack/dnsmasq-dns-6578955fd5-jsqcd" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.230809 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d07c99d-fe00-4217-8d7a-2f848e825bf1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.230828 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-dns-svc\") pod \"dnsmasq-dns-6578955fd5-jsqcd\" (UID: \"9037dd54-3cca-491b-9f1d-27393d6ec544\") " pod="openstack/dnsmasq-dns-6578955fd5-jsqcd" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.230859 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-jsqcd\" (UID: \"9037dd54-3cca-491b-9f1d-27393d6ec544\") " pod="openstack/dnsmasq-dns-6578955fd5-jsqcd" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.230914 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.230929 4766 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.231034 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.234976 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-config" (OuterVolumeSpecName: "config") pod "e21d6c49-ac47-47dc-9515-2ff0e5e04f31" (UID: "e21d6c49-ac47-47dc-9515-2ff0e5e04f31"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.258818 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e21d6c49-ac47-47dc-9515-2ff0e5e04f31" (UID: "e21d6c49-ac47-47dc-9515-2ff0e5e04f31"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.297101 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e21d6c49-ac47-47dc-9515-2ff0e5e04f31" (UID: "e21d6c49-ac47-47dc-9515-2ff0e5e04f31"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.336756 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw62d\" (UniqueName: \"kubernetes.io/projected/9037dd54-3cca-491b-9f1d-27393d6ec544-kube-api-access-qw62d\") pod \"dnsmasq-dns-6578955fd5-jsqcd\" (UID: \"9037dd54-3cca-491b-9f1d-27393d6ec544\") " pod="openstack/dnsmasq-dns-6578955fd5-jsqcd" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.336810 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-jsqcd\" (UID: \"9037dd54-3cca-491b-9f1d-27393d6ec544\") " pod="openstack/dnsmasq-dns-6578955fd5-jsqcd" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.336854 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-jsqcd\" (UID: \"9037dd54-3cca-491b-9f1d-27393d6ec544\") " pod="openstack/dnsmasq-dns-6578955fd5-jsqcd" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.336908 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d07c99d-fe00-4217-8d7a-2f848e825bf1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.336930 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d07c99d-fe00-4217-8d7a-2f848e825bf1-scripts\") pod \"cinder-scheduler-0\" (UID: \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.337007 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-config\") pod \"dnsmasq-dns-6578955fd5-jsqcd\" (UID: \"9037dd54-3cca-491b-9f1d-27393d6ec544\") " pod="openstack/dnsmasq-dns-6578955fd5-jsqcd" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.337029 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d07c99d-fe00-4217-8d7a-2f848e825bf1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\") " 
pod="openstack/cinder-scheduler-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.337048 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-dns-svc\") pod \"dnsmasq-dns-6578955fd5-jsqcd\" (UID: \"9037dd54-3cca-491b-9f1d-27393d6ec544\") " pod="openstack/dnsmasq-dns-6578955fd5-jsqcd" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.337098 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-jsqcd\" (UID: \"9037dd54-3cca-491b-9f1d-27393d6ec544\") " pod="openstack/dnsmasq-dns-6578955fd5-jsqcd" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.337130 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d07c99d-fe00-4217-8d7a-2f848e825bf1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.337158 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d07c99d-fe00-4217-8d7a-2f848e825bf1-config-data\") pod \"cinder-scheduler-0\" (UID: \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.337182 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbsz8\" (UniqueName: \"kubernetes.io/projected/4d07c99d-fe00-4217-8d7a-2f848e825bf1-kube-api-access-tbsz8\") pod \"cinder-scheduler-0\" (UID: \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.337247 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.337267 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.337280 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e21d6c49-ac47-47dc-9515-2ff0e5e04f31-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.337804 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-jsqcd\" (UID: \"9037dd54-3cca-491b-9f1d-27393d6ec544\") " pod="openstack/dnsmasq-dns-6578955fd5-jsqcd" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.338645 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-jsqcd\" (UID: \"9037dd54-3cca-491b-9f1d-27393d6ec544\") " pod="openstack/dnsmasq-dns-6578955fd5-jsqcd" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.339008 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d07c99d-fe00-4217-8d7a-2f848e825bf1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.339877 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-jsqcd\" (UID: \"9037dd54-3cca-491b-9f1d-27393d6ec544\") " pod="openstack/dnsmasq-dns-6578955fd5-jsqcd" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.340369 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-dns-svc\") pod \"dnsmasq-dns-6578955fd5-jsqcd\" (UID: \"9037dd54-3cca-491b-9f1d-27393d6ec544\") " pod="openstack/dnsmasq-dns-6578955fd5-jsqcd" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.340385 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-config\") pod \"dnsmasq-dns-6578955fd5-jsqcd\" (UID: \"9037dd54-3cca-491b-9f1d-27393d6ec544\") " pod="openstack/dnsmasq-dns-6578955fd5-jsqcd" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.342133 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d07c99d-fe00-4217-8d7a-2f848e825bf1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.344755 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d07c99d-fe00-4217-8d7a-2f848e825bf1-scripts\") pod \"cinder-scheduler-0\" (UID: \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.347690 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d07c99d-fe00-4217-8d7a-2f848e825bf1-config-data\") pod \"cinder-scheduler-0\" (UID: \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.349367 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d07c99d-fe00-4217-8d7a-2f848e825bf1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.386098 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbsz8\" (UniqueName: \"kubernetes.io/projected/4d07c99d-fe00-4217-8d7a-2f848e825bf1-kube-api-access-tbsz8\") pod \"cinder-scheduler-0\" (UID: \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.391282 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw62d\" (UniqueName: \"kubernetes.io/projected/9037dd54-3cca-491b-9f1d-27393d6ec544-kube-api-access-qw62d\") pod \"dnsmasq-dns-6578955fd5-jsqcd\" (UID: \"9037dd54-3cca-491b-9f1d-27393d6ec544\") " 
pod="openstack/dnsmasq-dns-6578955fd5-jsqcd" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.405990 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.433661 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.438368 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.449376 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.456477 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-jsqcd" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.485351 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.547094 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1bc96e3-8168-4c83-a4b0-89238efe2b16-config-data\") pod \"cinder-api-0\" (UID: \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\") " pod="openstack/cinder-api-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.547316 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1bc96e3-8168-4c83-a4b0-89238efe2b16-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\") " pod="openstack/cinder-api-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.547573 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1bc96e3-8168-4c83-a4b0-89238efe2b16-scripts\") pod \"cinder-api-0\" (UID: \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\") " pod="openstack/cinder-api-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.548124 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1bc96e3-8168-4c83-a4b0-89238efe2b16-logs\") pod \"cinder-api-0\" (UID: \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\") " pod="openstack/cinder-api-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.548209 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwp6b\" (UniqueName: \"kubernetes.io/projected/e1bc96e3-8168-4c83-a4b0-89238efe2b16-kube-api-access-lwp6b\") pod \"cinder-api-0\" (UID: \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\") " pod="openstack/cinder-api-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.548279 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1bc96e3-8168-4c83-a4b0-89238efe2b16-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\") " pod="openstack/cinder-api-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.548347 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1bc96e3-8168-4c83-a4b0-89238efe2b16-config-data-custom\") pod 
\"cinder-api-0\" (UID: \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\") " pod="openstack/cinder-api-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.653436 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1bc96e3-8168-4c83-a4b0-89238efe2b16-scripts\") pod \"cinder-api-0\" (UID: \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\") " pod="openstack/cinder-api-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.653508 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1bc96e3-8168-4c83-a4b0-89238efe2b16-logs\") pod \"cinder-api-0\" (UID: \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\") " pod="openstack/cinder-api-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.653551 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwp6b\" (UniqueName: \"kubernetes.io/projected/e1bc96e3-8168-4c83-a4b0-89238efe2b16-kube-api-access-lwp6b\") pod \"cinder-api-0\" (UID: \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\") " pod="openstack/cinder-api-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.653573 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1bc96e3-8168-4c83-a4b0-89238efe2b16-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\") " pod="openstack/cinder-api-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.653593 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1bc96e3-8168-4c83-a4b0-89238efe2b16-config-data-custom\") pod \"cinder-api-0\" (UID: \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\") " pod="openstack/cinder-api-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.653668 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1bc96e3-8168-4c83-a4b0-89238efe2b16-config-data\") pod \"cinder-api-0\" (UID: \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\") " pod="openstack/cinder-api-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.653700 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1bc96e3-8168-4c83-a4b0-89238efe2b16-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\") " pod="openstack/cinder-api-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.654051 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1bc96e3-8168-4c83-a4b0-89238efe2b16-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\") " pod="openstack/cinder-api-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.654440 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1bc96e3-8168-4c83-a4b0-89238efe2b16-logs\") pod \"cinder-api-0\" (UID: \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\") " pod="openstack/cinder-api-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.668029 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1bc96e3-8168-4c83-a4b0-89238efe2b16-combined-ca-bundle\") pod \"cinder-api-0\" (UID: 
\"e1bc96e3-8168-4c83-a4b0-89238efe2b16\") " pod="openstack/cinder-api-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.668557 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1bc96e3-8168-4c83-a4b0-89238efe2b16-config-data\") pod \"cinder-api-0\" (UID: \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\") " pod="openstack/cinder-api-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.687193 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1bc96e3-8168-4c83-a4b0-89238efe2b16-scripts\") pod \"cinder-api-0\" (UID: \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\") " pod="openstack/cinder-api-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.688202 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1bc96e3-8168-4c83-a4b0-89238efe2b16-config-data-custom\") pod \"cinder-api-0\" (UID: \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\") " pod="openstack/cinder-api-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.700348 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwp6b\" (UniqueName: \"kubernetes.io/projected/e1bc96e3-8168-4c83-a4b0-89238efe2b16-kube-api-access-lwp6b\") pod \"cinder-api-0\" (UID: \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\") " pod="openstack/cinder-api-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.767622 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.857711 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.858114 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-hldhd" event={"ID":"e21d6c49-ac47-47dc-9515-2ff0e5e04f31","Type":"ContainerDied","Data":"ec91be07f9497483e1a1392af1be16acea81aec90c16b0a4a2871daa36c65672"} Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.858155 4766 scope.go:117] "RemoveContainer" containerID="cbecd2dd6f5c9fffbc21fc983d684baddd2a90e8023527a627160d7c5f7ee6e2" Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.918070 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-hldhd"] Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.936888 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-hldhd"] Jan 29 11:47:08 crc kubenswrapper[4766]: I0129 11:47:08.940620 4766 scope.go:117] "RemoveContainer" containerID="03a3dcb42c888e6846b8f5d199f135ffb445f08c9ab827f2c9921b65405ef94a" Jan 29 11:47:09 crc kubenswrapper[4766]: I0129 11:47:09.027448 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-jsqcd"] Jan 29 11:47:09 crc kubenswrapper[4766]: W0129 11:47:09.046635 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9037dd54_3cca_491b_9f1d_27393d6ec544.slice/crio-e5bdd1a5cbdc60529c9edf4762523068a5b2f3d68cab4cb2e6c4146c98f2554e WatchSource:0}: Error finding container e5bdd1a5cbdc60529c9edf4762523068a5b2f3d68cab4cb2e6c4146c98f2554e: Status 404 returned error can't find the container with id e5bdd1a5cbdc60529c9edf4762523068a5b2f3d68cab4cb2e6c4146c98f2554e Jan 29 11:47:09 crc 
Jan 29 11:47:09 crc kubenswrapper[4766]: I0129 11:47:09.241184 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e21d6c49-ac47-47dc-9515-2ff0e5e04f31" path="/var/lib/kubelet/pods/e21d6c49-ac47-47dc-9515-2ff0e5e04f31/volumes"
Jan 29 11:47:09 crc kubenswrapper[4766]: I0129 11:47:09.250476 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 29 11:47:09 crc kubenswrapper[4766]: I0129 11:47:09.338205 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 29 11:47:09 crc kubenswrapper[4766]: W0129 11:47:09.354527 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1bc96e3_8168_4c83_a4b0_89238efe2b16.slice/crio-218c2e2628575f8f6fa3cd0b80cfd9deaf5bf9fc850512e725430975b7729bf7 WatchSource:0}: Error finding container 218c2e2628575f8f6fa3cd0b80cfd9deaf5bf9fc850512e725430975b7729bf7: Status 404 returned error can't find the container with id 218c2e2628575f8f6fa3cd0b80cfd9deaf5bf9fc850512e725430975b7729bf7
Jan 29 11:47:09 crc kubenswrapper[4766]: I0129 11:47:09.871209 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4d07c99d-fe00-4217-8d7a-2f848e825bf1","Type":"ContainerStarted","Data":"a1c16d19373476f00e7375be0da993d0b9b1954159e12d2ff30a03783d1e222b"}
Jan 29 11:47:09 crc kubenswrapper[4766]: I0129 11:47:09.882034 4766 generic.go:334] "Generic (PLEG): container finished" podID="9037dd54-3cca-491b-9f1d-27393d6ec544" containerID="be0ad3836c6733b6b9f926a905818bde7dd60a66fcafdbaf73d9608859ca9817" exitCode=0
Jan 29 11:47:09 crc kubenswrapper[4766]: I0129 11:47:09.882458 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-jsqcd" event={"ID":"9037dd54-3cca-491b-9f1d-27393d6ec544","Type":"ContainerDied","Data":"be0ad3836c6733b6b9f926a905818bde7dd60a66fcafdbaf73d9608859ca9817"}
Jan 29 11:47:09 crc kubenswrapper[4766]: I0129 11:47:09.882529 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-jsqcd" event={"ID":"9037dd54-3cca-491b-9f1d-27393d6ec544","Type":"ContainerStarted","Data":"e5bdd1a5cbdc60529c9edf4762523068a5b2f3d68cab4cb2e6c4146c98f2554e"}
Jan 29 11:47:09 crc kubenswrapper[4766]: I0129 11:47:09.888148 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e1bc96e3-8168-4c83-a4b0-89238efe2b16","Type":"ContainerStarted","Data":"218c2e2628575f8f6fa3cd0b80cfd9deaf5bf9fc850512e725430975b7729bf7"}
Jan 29 11:47:09 crc kubenswrapper[4766]: I0129 11:47:09.922161 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-864fcd46f6-bn7r2"
Jan 29 11:47:10 crc kubenswrapper[4766]: I0129 11:47:10.442822 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-864fcd46f6-bn7r2"
Jan 29 11:47:10 crc kubenswrapper[4766]: I0129 11:47:10.523096 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7fb8b49db-d28l6"]
Jan 29 11:47:10 crc kubenswrapper[4766]: I0129 11:47:10.523375 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7fb8b49db-d28l6" podUID="ca25e412-9c10-45d3-84b4-a8f059ddcfbc" containerName="barbican-api-log" containerID="cri-o://6a4e17c0cd9abeab7ce0e8c3bda8defe3726697a183c6da899ffcc8fde44193a" gracePeriod=30
Jan 29 11:47:10 crc kubenswrapper[4766]: I0129 11:47:10.523881 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7fb8b49db-d28l6" podUID="ca25e412-9c10-45d3-84b4-a8f059ddcfbc" containerName="barbican-api" containerID="cri-o://65409ade89ea8631310b9303ff5f40d6637a80413e8f22e6d6c33acc1695a06e" gracePeriod=30
Jan 29 11:47:10 crc kubenswrapper[4766]: I0129 11:47:10.543710 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7fb8b49db-d28l6" podUID="ca25e412-9c10-45d3-84b4-a8f059ddcfbc" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.150:9311/healthcheck\": EOF"
Jan 29 11:47:10 crc kubenswrapper[4766]: I0129 11:47:10.543753 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7fb8b49db-d28l6" podUID="ca25e412-9c10-45d3-84b4-a8f059ddcfbc" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.150:9311/healthcheck\": EOF"
Jan 29 11:47:10 crc kubenswrapper[4766]: I0129 11:47:10.543821 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7fb8b49db-d28l6" podUID="ca25e412-9c10-45d3-84b4-a8f059ddcfbc" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.150:9311/healthcheck\": EOF"
Jan 29 11:47:10 crc kubenswrapper[4766]: I0129 11:47:10.543710 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7fb8b49db-d28l6" podUID="ca25e412-9c10-45d3-84b4-a8f059ddcfbc" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.150:9311/healthcheck\": EOF"
Jan 29 11:47:10 crc kubenswrapper[4766]: I0129 11:47:10.938920 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-jsqcd" event={"ID":"9037dd54-3cca-491b-9f1d-27393d6ec544","Type":"ContainerStarted","Data":"15289de76d9cc3802c9a420d29af49465b6a7477dbc7383379a8dccdbe753045"}
Jan 29 11:47:10 crc kubenswrapper[4766]: I0129 11:47:10.940251 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-jsqcd"
Jan 29 11:47:10 crc kubenswrapper[4766]: I0129 11:47:10.946685 4766 generic.go:334] "Generic (PLEG): container finished" podID="ca25e412-9c10-45d3-84b4-a8f059ddcfbc" containerID="6a4e17c0cd9abeab7ce0e8c3bda8defe3726697a183c6da899ffcc8fde44193a" exitCode=143
Jan 29 11:47:10 crc kubenswrapper[4766]: I0129 11:47:10.946759 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fb8b49db-d28l6" event={"ID":"ca25e412-9c10-45d3-84b4-a8f059ddcfbc","Type":"ContainerDied","Data":"6a4e17c0cd9abeab7ce0e8c3bda8defe3726697a183c6da899ffcc8fde44193a"}
Jan 29 11:47:10 crc kubenswrapper[4766]: I0129 11:47:10.950140 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e1bc96e3-8168-4c83-a4b0-89238efe2b16","Type":"ContainerStarted","Data":"c1047f5eb7a840c19fedc9f85eae7ce080d491befaa99c5f78ded819b5ae0780"}
Jan 29 11:47:10 crc kubenswrapper[4766]: I0129 11:47:10.978685 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-jsqcd" podStartSLOduration=2.978662121 podStartE2EDuration="2.978662121s" podCreationTimestamp="2026-01-29 11:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:47:10.961471654 +0000 UTC m=+1568.073864695" watchObservedRunningTime="2026-01-29 11:47:10.978662121 +0000 UTC m=+1568.091055132"
Jan 29 11:47:11 crc kubenswrapper[4766]: I0129 11:47:11.840648 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 29 11:47:11 crc kubenswrapper[4766]: I0129 11:47:11.965193 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e1bc96e3-8168-4c83-a4b0-89238efe2b16","Type":"ContainerStarted","Data":"2a3fa1f70f49583a8c320fa32219b90cb77ba6270c253ae237f628a0b9ddf226"}
Jan 29 11:47:11 crc kubenswrapper[4766]: I0129 11:47:11.965582 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 29 11:47:11 crc kubenswrapper[4766]: I0129 11:47:11.969723 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4d07c99d-fe00-4217-8d7a-2f848e825bf1","Type":"ContainerStarted","Data":"db668750241607632455ecc4015916149a6a26fb010ad04cf265aaeb0ebb4649"}
Jan 29 11:47:12 crc kubenswrapper[4766]: I0129 11:47:12.000037 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.000017037 podStartE2EDuration="4.000017037s" podCreationTimestamp="2026-01-29 11:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:47:11.997909819 +0000 UTC m=+1569.110302860" watchObservedRunningTime="2026-01-29 11:47:12.000017037 +0000 UTC m=+1569.112410058"
Jan 29 11:47:12 crc kubenswrapper[4766]: I0129 11:47:12.978250 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4d07c99d-fe00-4217-8d7a-2f848e825bf1","Type":"ContainerStarted","Data":"64d83de583bde671fb785f7752fa4aa30c03acb174e1dee3aaeafc4d860250c2"}
Jan 29 11:47:12 crc kubenswrapper[4766]: I0129 11:47:12.978388 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e1bc96e3-8168-4c83-a4b0-89238efe2b16" containerName="cinder-api-log" containerID="cri-o://c1047f5eb7a840c19fedc9f85eae7ce080d491befaa99c5f78ded819b5ae0780" gracePeriod=30
Jan 29 11:47:12 crc kubenswrapper[4766]: I0129 11:47:12.978486 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e1bc96e3-8168-4c83-a4b0-89238efe2b16" containerName="cinder-api" containerID="cri-o://2a3fa1f70f49583a8c320fa32219b90cb77ba6270c253ae237f628a0b9ddf226" gracePeriod=30
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.013275 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.052978637 podStartE2EDuration="5.013222567s" podCreationTimestamp="2026-01-29 11:47:08 +0000 UTC" firstStartedPulling="2026-01-29 11:47:09.263681219 +0000 UTC m=+1566.376074230" lastFinishedPulling="2026-01-29 11:47:10.223925149 +0000 UTC m=+1567.336318160" observedRunningTime="2026-01-29 11:47:13.001261646 +0000 UTC m=+1570.113654667" watchObservedRunningTime="2026-01-29 11:47:13.013222567 +0000 UTC m=+1570.125615578"
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.407207 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.606458 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.690059 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1bc96e3-8168-4c83-a4b0-89238efe2b16-etc-machine-id\") pod \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\" (UID: \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\") "
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.690137 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1bc96e3-8168-4c83-a4b0-89238efe2b16-config-data-custom\") pod \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\" (UID: \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\") "
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.690194 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1bc96e3-8168-4c83-a4b0-89238efe2b16-combined-ca-bundle\") pod \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\" (UID: \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\") "
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.690224 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1bc96e3-8168-4c83-a4b0-89238efe2b16-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e1bc96e3-8168-4c83-a4b0-89238efe2b16" (UID: "e1bc96e3-8168-4c83-a4b0-89238efe2b16"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.690334 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwp6b\" (UniqueName: \"kubernetes.io/projected/e1bc96e3-8168-4c83-a4b0-89238efe2b16-kube-api-access-lwp6b\") pod \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\" (UID: \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\") "
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.690368 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1bc96e3-8168-4c83-a4b0-89238efe2b16-logs\") pod \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\" (UID: \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\") "
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.690462 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1bc96e3-8168-4c83-a4b0-89238efe2b16-scripts\") pod \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\" (UID: \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\") "
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.690504 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1bc96e3-8168-4c83-a4b0-89238efe2b16-config-data\") pod \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\" (UID: \"e1bc96e3-8168-4c83-a4b0-89238efe2b16\") "
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.691126 4766 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1bc96e3-8168-4c83-a4b0-89238efe2b16-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.691469 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1bc96e3-8168-4c83-a4b0-89238efe2b16-logs" (OuterVolumeSpecName: "logs") pod "e1bc96e3-8168-4c83-a4b0-89238efe2b16" (UID: "e1bc96e3-8168-4c83-a4b0-89238efe2b16"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.696287 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1bc96e3-8168-4c83-a4b0-89238efe2b16-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e1bc96e3-8168-4c83-a4b0-89238efe2b16" (UID: "e1bc96e3-8168-4c83-a4b0-89238efe2b16"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.709560 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1bc96e3-8168-4c83-a4b0-89238efe2b16-scripts" (OuterVolumeSpecName: "scripts") pod "e1bc96e3-8168-4c83-a4b0-89238efe2b16" (UID: "e1bc96e3-8168-4c83-a4b0-89238efe2b16"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.711635 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1bc96e3-8168-4c83-a4b0-89238efe2b16-kube-api-access-lwp6b" (OuterVolumeSpecName: "kube-api-access-lwp6b") pod "e1bc96e3-8168-4c83-a4b0-89238efe2b16" (UID: "e1bc96e3-8168-4c83-a4b0-89238efe2b16"). InnerVolumeSpecName "kube-api-access-lwp6b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.742759 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1bc96e3-8168-4c83-a4b0-89238efe2b16-config-data" (OuterVolumeSpecName: "config-data") pod "e1bc96e3-8168-4c83-a4b0-89238efe2b16" (UID: "e1bc96e3-8168-4c83-a4b0-89238efe2b16"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.756422 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1bc96e3-8168-4c83-a4b0-89238efe2b16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1bc96e3-8168-4c83-a4b0-89238efe2b16" (UID: "e1bc96e3-8168-4c83-a4b0-89238efe2b16"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.793011 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1bc96e3-8168-4c83-a4b0-89238efe2b16-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.793048 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwp6b\" (UniqueName: \"kubernetes.io/projected/e1bc96e3-8168-4c83-a4b0-89238efe2b16-kube-api-access-lwp6b\") on node \"crc\" DevicePath \"\""
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.793058 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1bc96e3-8168-4c83-a4b0-89238efe2b16-logs\") on node \"crc\" DevicePath \"\""
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.793067 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1bc96e3-8168-4c83-a4b0-89238efe2b16-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.793079 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1bc96e3-8168-4c83-a4b0-89238efe2b16-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.793087 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1bc96e3-8168-4c83-a4b0-89238efe2b16-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.987101 4766 generic.go:334] "Generic (PLEG): container finished" podID="e1bc96e3-8168-4c83-a4b0-89238efe2b16" containerID="2a3fa1f70f49583a8c320fa32219b90cb77ba6270c253ae237f628a0b9ddf226" exitCode=0
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.987141 4766 generic.go:334] "Generic (PLEG): container finished" podID="e1bc96e3-8168-4c83-a4b0-89238efe2b16" containerID="c1047f5eb7a840c19fedc9f85eae7ce080d491befaa99c5f78ded819b5ae0780" exitCode=143
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.988082 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.997103 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e1bc96e3-8168-4c83-a4b0-89238efe2b16","Type":"ContainerDied","Data":"2a3fa1f70f49583a8c320fa32219b90cb77ba6270c253ae237f628a0b9ddf226"}
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.997151 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e1bc96e3-8168-4c83-a4b0-89238efe2b16","Type":"ContainerDied","Data":"c1047f5eb7a840c19fedc9f85eae7ce080d491befaa99c5f78ded819b5ae0780"}
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.997161 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e1bc96e3-8168-4c83-a4b0-89238efe2b16","Type":"ContainerDied","Data":"218c2e2628575f8f6fa3cd0b80cfd9deaf5bf9fc850512e725430975b7729bf7"}
Jan 29 11:47:13 crc kubenswrapper[4766]: I0129 11:47:13.997176 4766 scope.go:117] "RemoveContainer" containerID="2a3fa1f70f49583a8c320fa32219b90cb77ba6270c253ae237f628a0b9ddf226"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.031139 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.046636 4766 scope.go:117] "RemoveContainer" containerID="c1047f5eb7a840c19fedc9f85eae7ce080d491befaa99c5f78ded819b5ae0780"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.054916 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.071627 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 29 11:47:14 crc kubenswrapper[4766]: E0129 11:47:14.072055 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1bc96e3-8168-4c83-a4b0-89238efe2b16" containerName="cinder-api-log"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.072077 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1bc96e3-8168-4c83-a4b0-89238efe2b16" containerName="cinder-api-log"
Jan 29 11:47:14 crc kubenswrapper[4766]: E0129 11:47:14.072090 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1bc96e3-8168-4c83-a4b0-89238efe2b16" containerName="cinder-api"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.072098 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1bc96e3-8168-4c83-a4b0-89238efe2b16" containerName="cinder-api"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.072315 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1bc96e3-8168-4c83-a4b0-89238efe2b16" containerName="cinder-api"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.072335 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1bc96e3-8168-4c83-a4b0-89238efe2b16" containerName="cinder-api-log"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.073254 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.075817 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.076049 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.076179 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.087814 4766 scope.go:117] "RemoveContainer" containerID="2a3fa1f70f49583a8c320fa32219b90cb77ba6270c253ae237f628a0b9ddf226"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.088609 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 29 11:47:14 crc kubenswrapper[4766]: E0129 11:47:14.088697 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a3fa1f70f49583a8c320fa32219b90cb77ba6270c253ae237f628a0b9ddf226\": container with ID starting with 2a3fa1f70f49583a8c320fa32219b90cb77ba6270c253ae237f628a0b9ddf226 not found: ID does not exist" containerID="2a3fa1f70f49583a8c320fa32219b90cb77ba6270c253ae237f628a0b9ddf226"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.088731 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a3fa1f70f49583a8c320fa32219b90cb77ba6270c253ae237f628a0b9ddf226"} err="failed to get container status \"2a3fa1f70f49583a8c320fa32219b90cb77ba6270c253ae237f628a0b9ddf226\": rpc error: code = NotFound desc = could not find container \"2a3fa1f70f49583a8c320fa32219b90cb77ba6270c253ae237f628a0b9ddf226\": container with ID starting with 2a3fa1f70f49583a8c320fa32219b90cb77ba6270c253ae237f628a0b9ddf226 not found: ID does not exist"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.088759 4766 scope.go:117] "RemoveContainer" containerID="c1047f5eb7a840c19fedc9f85eae7ce080d491befaa99c5f78ded819b5ae0780"
Jan 29 11:47:14 crc kubenswrapper[4766]: E0129 11:47:14.091022 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1047f5eb7a840c19fedc9f85eae7ce080d491befaa99c5f78ded819b5ae0780\": container with ID starting with c1047f5eb7a840c19fedc9f85eae7ce080d491befaa99c5f78ded819b5ae0780 not found: ID does not exist" containerID="c1047f5eb7a840c19fedc9f85eae7ce080d491befaa99c5f78ded819b5ae0780"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.091087 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1047f5eb7a840c19fedc9f85eae7ce080d491befaa99c5f78ded819b5ae0780"} err="failed to get container status \"c1047f5eb7a840c19fedc9f85eae7ce080d491befaa99c5f78ded819b5ae0780\": rpc error: code = NotFound desc = could not find container \"c1047f5eb7a840c19fedc9f85eae7ce080d491befaa99c5f78ded819b5ae0780\": container with ID starting with c1047f5eb7a840c19fedc9f85eae7ce080d491befaa99c5f78ded819b5ae0780 not found: ID does not exist"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.091121 4766 scope.go:117] "RemoveContainer" containerID="2a3fa1f70f49583a8c320fa32219b90cb77ba6270c253ae237f628a0b9ddf226"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.091958 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a3fa1f70f49583a8c320fa32219b90cb77ba6270c253ae237f628a0b9ddf226"} err="failed to get container status \"2a3fa1f70f49583a8c320fa32219b90cb77ba6270c253ae237f628a0b9ddf226\": rpc error: code = NotFound desc = could not find container \"2a3fa1f70f49583a8c320fa32219b90cb77ba6270c253ae237f628a0b9ddf226\": container with ID starting with 2a3fa1f70f49583a8c320fa32219b90cb77ba6270c253ae237f628a0b9ddf226 not found: ID does not exist"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.092007 4766 scope.go:117] "RemoveContainer" containerID="c1047f5eb7a840c19fedc9f85eae7ce080d491befaa99c5f78ded819b5ae0780"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.092685 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1047f5eb7a840c19fedc9f85eae7ce080d491befaa99c5f78ded819b5ae0780"} err="failed to get container status \"c1047f5eb7a840c19fedc9f85eae7ce080d491befaa99c5f78ded819b5ae0780\": rpc error: code = NotFound desc = could not find container \"c1047f5eb7a840c19fedc9f85eae7ce080d491befaa99c5f78ded819b5ae0780\": container with ID starting with c1047f5eb7a840c19fedc9f85eae7ce080d491befaa99c5f78ded819b5ae0780 not found: ID does not exist"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.101174 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-scripts\") pod \"cinder-api-0\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " pod="openstack/cinder-api-0"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.101239 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-config-data\") pod \"cinder-api-0\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " pod="openstack/cinder-api-0"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.101278 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0c26286-7e5f-4610-967b-408ad3916918-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " pod="openstack/cinder-api-0"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.101333 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-config-data-custom\") pod \"cinder-api-0\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " pod="openstack/cinder-api-0"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.101362 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " pod="openstack/cinder-api-0"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.101441 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8zpw\" (UniqueName: \"kubernetes.io/projected/c0c26286-7e5f-4610-967b-408ad3916918-kube-api-access-d8zpw\") pod \"cinder-api-0\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " pod="openstack/cinder-api-0"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.101567 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " pod="openstack/cinder-api-0"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.101698 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0c26286-7e5f-4610-967b-408ad3916918-logs\") pod \"cinder-api-0\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " pod="openstack/cinder-api-0"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.101728 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " pod="openstack/cinder-api-0"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.203265 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " pod="openstack/cinder-api-0"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.203391 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8zpw\" (UniqueName: \"kubernetes.io/projected/c0c26286-7e5f-4610-967b-408ad3916918-kube-api-access-d8zpw\") pod \"cinder-api-0\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " pod="openstack/cinder-api-0"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.203561 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " pod="openstack/cinder-api-0"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.203665 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0c26286-7e5f-4610-967b-408ad3916918-logs\") pod \"cinder-api-0\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " pod="openstack/cinder-api-0"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.203719 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " pod="openstack/cinder-api-0"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.203766 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-scripts\") pod \"cinder-api-0\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " pod="openstack/cinder-api-0"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.203828 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-config-data\") pod \"cinder-api-0\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " pod="openstack/cinder-api-0"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.203885 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0c26286-7e5f-4610-967b-408ad3916918-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " pod="openstack/cinder-api-0"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.203975 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-config-data-custom\") pod \"cinder-api-0\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " pod="openstack/cinder-api-0"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.204977 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0c26286-7e5f-4610-967b-408ad3916918-logs\") pod \"cinder-api-0\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " pod="openstack/cinder-api-0"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.205094 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0c26286-7e5f-4610-967b-408ad3916918-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " pod="openstack/cinder-api-0"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.208965 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " pod="openstack/cinder-api-0"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.209122 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-config-data-custom\") pod \"cinder-api-0\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " pod="openstack/cinder-api-0"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.209543 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " pod="openstack/cinder-api-0"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.210015 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " pod="openstack/cinder-api-0"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.210102 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-scripts\") pod \"cinder-api-0\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " pod="openstack/cinder-api-0"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.210611 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-config-data\") pod \"cinder-api-0\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " pod="openstack/cinder-api-0"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.219643 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8zpw\" (UniqueName: \"kubernetes.io/projected/c0c26286-7e5f-4610-967b-408ad3916918-kube-api-access-d8zpw\") pod \"cinder-api-0\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " pod="openstack/cinder-api-0"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.387930 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 29 11:47:14 crc kubenswrapper[4766]: W0129 11:47:14.836027 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0c26286_7e5f_4610_967b_408ad3916918.slice/crio-e496e4f74d98dd549bc03610d7e5f96b0a9e33f9406ec5d5da1f9e47db52ae8e WatchSource:0}: Error finding container e496e4f74d98dd549bc03610d7e5f96b0a9e33f9406ec5d5da1f9e47db52ae8e: Status 404 returned error can't find the container with id e496e4f74d98dd549bc03610d7e5f96b0a9e33f9406ec5d5da1f9e47db52ae8e
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.837898 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.967761 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7fb8b49db-d28l6" podUID="ca25e412-9c10-45d3-84b4-a8f059ddcfbc" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.150:9311/healthcheck\": read tcp 10.217.0.2:54484->10.217.0.150:9311: read: connection reset by peer"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.967790 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7fb8b49db-d28l6" podUID="ca25e412-9c10-45d3-84b4-a8f059ddcfbc" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.150:9311/healthcheck\": read tcp 10.217.0.2:54474->10.217.0.150:9311: read: connection reset by peer"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.968114 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7fb8b49db-d28l6" podUID="ca25e412-9c10-45d3-84b4-a8f059ddcfbc" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.150:9311/healthcheck\": dial tcp 10.217.0.150:9311: connect: connection refused"
Jan 29 11:47:14 crc kubenswrapper[4766]: I0129 11:47:14.968328 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7fb8b49db-d28l6" podUID="ca25e412-9c10-45d3-84b4-a8f059ddcfbc" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.150:9311/healthcheck\": dial tcp 10.217.0.150:9311: connect: connection refused"
Jan 29 11:47:15 crc kubenswrapper[4766]: I0129 11:47:15.004605 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c0c26286-7e5f-4610-967b-408ad3916918","Type":"ContainerStarted","Data":"e496e4f74d98dd549bc03610d7e5f96b0a9e33f9406ec5d5da1f9e47db52ae8e"}
Jan 29 11:47:15 crc kubenswrapper[4766]: I0129 11:47:15.076871 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:47:15 crc kubenswrapper[4766]: I0129 11:47:15.241438 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1bc96e3-8168-4c83-a4b0-89238efe2b16" path="/var/lib/kubelet/pods/e1bc96e3-8168-4c83-a4b0-89238efe2b16/volumes"
Jan 29 11:47:15 crc kubenswrapper[4766]: I0129 11:47:15.471112 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7fb8b49db-d28l6"
Jan 29 11:47:15 crc kubenswrapper[4766]: I0129 11:47:15.529964 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55hfl\" (UniqueName: \"kubernetes.io/projected/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-kube-api-access-55hfl\") pod \"ca25e412-9c10-45d3-84b4-a8f059ddcfbc\" (UID: \"ca25e412-9c10-45d3-84b4-a8f059ddcfbc\") "
Jan 29 11:47:15 crc kubenswrapper[4766]: I0129 11:47:15.530025 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-config-data\") pod \"ca25e412-9c10-45d3-84b4-a8f059ddcfbc\" (UID: \"ca25e412-9c10-45d3-84b4-a8f059ddcfbc\") "
Jan 29 11:47:15 crc kubenswrapper[4766]: I0129 11:47:15.530047 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-config-data-custom\") pod \"ca25e412-9c10-45d3-84b4-a8f059ddcfbc\" (UID: \"ca25e412-9c10-45d3-84b4-a8f059ddcfbc\") "
Jan 29 11:47:15 crc kubenswrapper[4766]: I0129 11:47:15.530204 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-combined-ca-bundle\") pod \"ca25e412-9c10-45d3-84b4-a8f059ddcfbc\" (UID: \"ca25e412-9c10-45d3-84b4-a8f059ddcfbc\") "
Jan 29 11:47:15 crc kubenswrapper[4766]: I0129 11:47:15.530223 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-logs\") pod \"ca25e412-9c10-45d3-84b4-a8f059ddcfbc\" (UID: \"ca25e412-9c10-45d3-84b4-a8f059ddcfbc\") "
Jan 29 11:47:15 crc kubenswrapper[4766]: I0129 11:47:15.530970 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-logs" (OuterVolumeSpecName: "logs") pod "ca25e412-9c10-45d3-84b4-a8f059ddcfbc" (UID: "ca25e412-9c10-45d3-84b4-a8f059ddcfbc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:47:15 crc kubenswrapper[4766]: I0129 11:47:15.534550 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-kube-api-access-55hfl" (OuterVolumeSpecName: "kube-api-access-55hfl") pod "ca25e412-9c10-45d3-84b4-a8f059ddcfbc" (UID: "ca25e412-9c10-45d3-84b4-a8f059ddcfbc"). InnerVolumeSpecName "kube-api-access-55hfl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:47:15 crc kubenswrapper[4766]: I0129 11:47:15.537935 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ca25e412-9c10-45d3-84b4-a8f059ddcfbc" (UID: "ca25e412-9c10-45d3-84b4-a8f059ddcfbc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:47:15 crc kubenswrapper[4766]: I0129 11:47:15.556934 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca25e412-9c10-45d3-84b4-a8f059ddcfbc" (UID: "ca25e412-9c10-45d3-84b4-a8f059ddcfbc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:47:15 crc kubenswrapper[4766]: I0129 11:47:15.588700 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-config-data" (OuterVolumeSpecName: "config-data") pod "ca25e412-9c10-45d3-84b4-a8f059ddcfbc" (UID: "ca25e412-9c10-45d3-84b4-a8f059ddcfbc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:47:15 crc kubenswrapper[4766]: I0129 11:47:15.632642 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55hfl\" (UniqueName: \"kubernetes.io/projected/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-kube-api-access-55hfl\") on node \"crc\" DevicePath \"\""
Jan 29 11:47:15 crc kubenswrapper[4766]: I0129 11:47:15.632689 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 11:47:15 crc kubenswrapper[4766]: I0129 11:47:15.632705 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 29 11:47:15 crc kubenswrapper[4766]: I0129 11:47:15.632716 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 11:47:15 crc kubenswrapper[4766]: I0129 11:47:15.632730 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca25e412-9c10-45d3-84b4-a8f059ddcfbc-logs\") on node \"crc\" DevicePath \"\""
Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.022259 4766 generic.go:334] "Generic (PLEG): container finished" podID="ca25e412-9c10-45d3-84b4-a8f059ddcfbc" containerID="65409ade89ea8631310b9303ff5f40d6637a80413e8f22e6d6c33acc1695a06e" exitCode=0
Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.022360 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fb8b49db-d28l6" event={"ID":"ca25e412-9c10-45d3-84b4-a8f059ddcfbc","Type":"ContainerDied","Data":"65409ade89ea8631310b9303ff5f40d6637a80413e8f22e6d6c33acc1695a06e"}
Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.022572 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fb8b49db-d28l6" event={"ID":"ca25e412-9c10-45d3-84b4-a8f059ddcfbc","Type":"ContainerDied","Data":"d051dd9e7b44dfa14e7fbd198237702c487ab53c8592b774a7aa988ab4fa8f2b"}
Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.022592 4766 scope.go:117] "RemoveContainer" containerID="65409ade89ea8631310b9303ff5f40d6637a80413e8f22e6d6c33acc1695a06e"
Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.022377 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7fb8b49db-d28l6"
Need to start a new one" pod="openstack/barbican-api-7fb8b49db-d28l6" Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.024812 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c0c26286-7e5f-4610-967b-408ad3916918","Type":"ContainerStarted","Data":"a3a9fbf48c090c048092e1c49334325b7802b39586ef26e6e58e4213960da8d3"} Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.057715 4766 scope.go:117] "RemoveContainer" containerID="6a4e17c0cd9abeab7ce0e8c3bda8defe3726697a183c6da899ffcc8fde44193a" Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.073485 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7fb8b49db-d28l6"] Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.080551 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7fb8b49db-d28l6"] Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.087825 4766 scope.go:117] "RemoveContainer" containerID="65409ade89ea8631310b9303ff5f40d6637a80413e8f22e6d6c33acc1695a06e" Jan 29 11:47:16 crc kubenswrapper[4766]: E0129 11:47:16.088344 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65409ade89ea8631310b9303ff5f40d6637a80413e8f22e6d6c33acc1695a06e\": container with ID starting with 65409ade89ea8631310b9303ff5f40d6637a80413e8f22e6d6c33acc1695a06e not found: ID does not exist" containerID="65409ade89ea8631310b9303ff5f40d6637a80413e8f22e6d6c33acc1695a06e" Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.088378 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65409ade89ea8631310b9303ff5f40d6637a80413e8f22e6d6c33acc1695a06e"} err="failed to get container status \"65409ade89ea8631310b9303ff5f40d6637a80413e8f22e6d6c33acc1695a06e\": rpc error: code = NotFound desc = could not find container \"65409ade89ea8631310b9303ff5f40d6637a80413e8f22e6d6c33acc1695a06e\": container with ID starting with 65409ade89ea8631310b9303ff5f40d6637a80413e8f22e6d6c33acc1695a06e not found: ID does not exist" Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.088397 4766 scope.go:117] "RemoveContainer" containerID="6a4e17c0cd9abeab7ce0e8c3bda8defe3726697a183c6da899ffcc8fde44193a" Jan 29 11:47:16 crc kubenswrapper[4766]: E0129 11:47:16.088758 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a4e17c0cd9abeab7ce0e8c3bda8defe3726697a183c6da899ffcc8fde44193a\": container with ID starting with 6a4e17c0cd9abeab7ce0e8c3bda8defe3726697a183c6da899ffcc8fde44193a not found: ID does not exist" containerID="6a4e17c0cd9abeab7ce0e8c3bda8defe3726697a183c6da899ffcc8fde44193a" Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.088808 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a4e17c0cd9abeab7ce0e8c3bda8defe3726697a183c6da899ffcc8fde44193a"} err="failed to get container status \"6a4e17c0cd9abeab7ce0e8c3bda8defe3726697a183c6da899ffcc8fde44193a\": rpc error: code = NotFound desc = could not find container \"6a4e17c0cd9abeab7ce0e8c3bda8defe3726697a183c6da899ffcc8fde44193a\": container with ID starting with 6a4e17c0cd9abeab7ce0e8c3bda8defe3726697a183c6da899ffcc8fde44193a not found: ID does not exist" Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.206252 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 29 11:47:16 crc kubenswrapper[4766]: E0129 
11:47:16.206872 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca25e412-9c10-45d3-84b4-a8f059ddcfbc" containerName="barbican-api" Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.206898 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca25e412-9c10-45d3-84b4-a8f059ddcfbc" containerName="barbican-api" Jan 29 11:47:16 crc kubenswrapper[4766]: E0129 11:47:16.206915 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca25e412-9c10-45d3-84b4-a8f059ddcfbc" containerName="barbican-api-log" Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.206923 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca25e412-9c10-45d3-84b4-a8f059ddcfbc" containerName="barbican-api-log" Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.207126 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca25e412-9c10-45d3-84b4-a8f059ddcfbc" containerName="barbican-api" Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.207150 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca25e412-9c10-45d3-84b4-a8f059ddcfbc" containerName="barbican-api-log" Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.207917 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.211863 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-6rp72" Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.212089 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.212240 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.240462 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.242661 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f484f11d-a20d-4d69-9619-d5f8df022bd7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f484f11d-a20d-4d69-9619-d5f8df022bd7\") " pod="openstack/openstackclient" Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.242735 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9wjv\" (UniqueName: \"kubernetes.io/projected/f484f11d-a20d-4d69-9619-d5f8df022bd7-kube-api-access-p9wjv\") pod \"openstackclient\" (UID: \"f484f11d-a20d-4d69-9619-d5f8df022bd7\") " pod="openstack/openstackclient" Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.242857 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f484f11d-a20d-4d69-9619-d5f8df022bd7-openstack-config-secret\") pod \"openstackclient\" (UID: \"f484f11d-a20d-4d69-9619-d5f8df022bd7\") " pod="openstack/openstackclient" Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.242937 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f484f11d-a20d-4d69-9619-d5f8df022bd7-openstack-config\") pod \"openstackclient\" (UID: \"f484f11d-a20d-4d69-9619-d5f8df022bd7\") " 
pod="openstack/openstackclient" Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.344314 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f484f11d-a20d-4d69-9619-d5f8df022bd7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f484f11d-a20d-4d69-9619-d5f8df022bd7\") " pod="openstack/openstackclient" Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.344394 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9wjv\" (UniqueName: \"kubernetes.io/projected/f484f11d-a20d-4d69-9619-d5f8df022bd7-kube-api-access-p9wjv\") pod \"openstackclient\" (UID: \"f484f11d-a20d-4d69-9619-d5f8df022bd7\") " pod="openstack/openstackclient" Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.344435 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f484f11d-a20d-4d69-9619-d5f8df022bd7-openstack-config-secret\") pod \"openstackclient\" (UID: \"f484f11d-a20d-4d69-9619-d5f8df022bd7\") " pod="openstack/openstackclient" Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.344485 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f484f11d-a20d-4d69-9619-d5f8df022bd7-openstack-config\") pod \"openstackclient\" (UID: \"f484f11d-a20d-4d69-9619-d5f8df022bd7\") " pod="openstack/openstackclient" Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.345459 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f484f11d-a20d-4d69-9619-d5f8df022bd7-openstack-config\") pod \"openstackclient\" (UID: \"f484f11d-a20d-4d69-9619-d5f8df022bd7\") " pod="openstack/openstackclient" Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.350670 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f484f11d-a20d-4d69-9619-d5f8df022bd7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f484f11d-a20d-4d69-9619-d5f8df022bd7\") " pod="openstack/openstackclient" Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.354521 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f484f11d-a20d-4d69-9619-d5f8df022bd7-openstack-config-secret\") pod \"openstackclient\" (UID: \"f484f11d-a20d-4d69-9619-d5f8df022bd7\") " pod="openstack/openstackclient" Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.361106 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9wjv\" (UniqueName: \"kubernetes.io/projected/f484f11d-a20d-4d69-9619-d5f8df022bd7-kube-api-access-p9wjv\") pod \"openstackclient\" (UID: \"f484f11d-a20d-4d69-9619-d5f8df022bd7\") " pod="openstack/openstackclient" Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.361546 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.361610 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" 
podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.541860 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 11:47:16 crc kubenswrapper[4766]: I0129 11:47:16.986962 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 11:47:17 crc kubenswrapper[4766]: I0129 11:47:17.039790 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f484f11d-a20d-4d69-9619-d5f8df022bd7","Type":"ContainerStarted","Data":"d214944c807f88661b78f62d7a0a02d995164e7564104ed84ab1bc2783a57885"} Jan 29 11:47:17 crc kubenswrapper[4766]: I0129 11:47:17.046717 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c0c26286-7e5f-4610-967b-408ad3916918","Type":"ContainerStarted","Data":"952afb9816e99acbe37c8a9ddc03d82aee8becf7ea80015a22c126ca32f58ff9"} Jan 29 11:47:17 crc kubenswrapper[4766]: I0129 11:47:17.047955 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 29 11:47:17 crc kubenswrapper[4766]: I0129 11:47:17.057377 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7ff4655576-rzc26" Jan 29 11:47:17 crc kubenswrapper[4766]: I0129 11:47:17.071905 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.071884959 podStartE2EDuration="3.071884959s" podCreationTimestamp="2026-01-29 11:47:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:47:17.066883481 +0000 UTC m=+1574.179276492" watchObservedRunningTime="2026-01-29 11:47:17.071884959 +0000 UTC m=+1574.184277980" Jan 29 11:47:17 crc kubenswrapper[4766]: I0129 11:47:17.090562 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7ff4655576-rzc26" Jan 29 11:47:17 crc kubenswrapper[4766]: I0129 11:47:17.236153 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca25e412-9c10-45d3-84b4-a8f059ddcfbc" path="/var/lib/kubelet/pods/ca25e412-9c10-45d3-84b4-a8f059ddcfbc/volumes" Jan 29 11:47:18 crc kubenswrapper[4766]: I0129 11:47:18.458569 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-jsqcd" Jan 29 11:47:18 crc kubenswrapper[4766]: I0129 11:47:18.554733 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-6cnjd"] Jan 29 11:47:18 crc kubenswrapper[4766]: I0129 11:47:18.556623 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd" podUID="4c8b3024-3e34-488a-8cea-ec3ee57fda99" containerName="dnsmasq-dns" containerID="cri-o://ecbdaa68778e5b026bb15de99d381a247817155af659c96860c67bd842555592" gracePeriod=10 Jan 29 11:47:18 crc kubenswrapper[4766]: I0129 11:47:18.695995 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 11:47:18 crc kubenswrapper[4766]: I0129 11:47:18.741847 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 11:47:19 crc kubenswrapper[4766]: I0129 11:47:19.071297 4766 
Jan 29 11:47:19 crc kubenswrapper[4766]: I0129 11:47:19.071355 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd" event={"ID":"4c8b3024-3e34-488a-8cea-ec3ee57fda99","Type":"ContainerDied","Data":"ecbdaa68778e5b026bb15de99d381a247817155af659c96860c67bd842555592"}
Jan 29 11:47:19 crc kubenswrapper[4766]: I0129 11:47:19.071419 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd" event={"ID":"4c8b3024-3e34-488a-8cea-ec3ee57fda99","Type":"ContainerDied","Data":"2543df0c60aee6a3f5673e8726128d2ee28292bf13d068f116bb06773f22a7a6"}
Jan 29 11:47:19 crc kubenswrapper[4766]: I0129 11:47:19.071430 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2543df0c60aee6a3f5673e8726128d2ee28292bf13d068f116bb06773f22a7a6"
Jan 29 11:47:19 crc kubenswrapper[4766]: I0129 11:47:19.071770 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4d07c99d-fe00-4217-8d7a-2f848e825bf1" containerName="cinder-scheduler" containerID="cri-o://db668750241607632455ecc4015916149a6a26fb010ad04cf265aaeb0ebb4649" gracePeriod=30
Jan 29 11:47:19 crc kubenswrapper[4766]: I0129 11:47:19.071866 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4d07c99d-fe00-4217-8d7a-2f848e825bf1" containerName="probe" containerID="cri-o://64d83de583bde671fb785f7752fa4aa30c03acb174e1dee3aaeafc4d860250c2" gracePeriod=30
Jan 29 11:47:19 crc kubenswrapper[4766]: I0129 11:47:19.085390 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd"
Jan 29 11:47:19 crc kubenswrapper[4766]: I0129 11:47:19.092179 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-dns-swift-storage-0\") pod \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\" (UID: \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\") "
Jan 29 11:47:19 crc kubenswrapper[4766]: I0129 11:47:19.092273 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-config\") pod \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\" (UID: \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\") "
Jan 29 11:47:19 crc kubenswrapper[4766]: I0129 11:47:19.092475 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-ovsdbserver-nb\") pod \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\" (UID: \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\") "
Jan 29 11:47:19 crc kubenswrapper[4766]: I0129 11:47:19.092496 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-dns-svc\") pod \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\" (UID: \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\") "
Jan 29 11:47:19 crc kubenswrapper[4766]: I0129 11:47:19.092598 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-ovsdbserver-sb\") pod \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\" (UID: \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\") "
Jan 29 11:47:19 crc kubenswrapper[4766]: I0129 11:47:19.092636 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcdc2\" (UniqueName: \"kubernetes.io/projected/4c8b3024-3e34-488a-8cea-ec3ee57fda99-kube-api-access-dcdc2\") pod \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\" (UID: \"4c8b3024-3e34-488a-8cea-ec3ee57fda99\") "
Jan 29 11:47:19 crc kubenswrapper[4766]: I0129 11:47:19.117602 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c8b3024-3e34-488a-8cea-ec3ee57fda99-kube-api-access-dcdc2" (OuterVolumeSpecName: "kube-api-access-dcdc2") pod "4c8b3024-3e34-488a-8cea-ec3ee57fda99" (UID: "4c8b3024-3e34-488a-8cea-ec3ee57fda99"). InnerVolumeSpecName "kube-api-access-dcdc2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:47:19 crc kubenswrapper[4766]: I0129 11:47:19.157856 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4c8b3024-3e34-488a-8cea-ec3ee57fda99" (UID: "4c8b3024-3e34-488a-8cea-ec3ee57fda99"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:47:19 crc kubenswrapper[4766]: I0129 11:47:19.179665 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-config" (OuterVolumeSpecName: "config") pod "4c8b3024-3e34-488a-8cea-ec3ee57fda99" (UID: "4c8b3024-3e34-488a-8cea-ec3ee57fda99"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:47:19 crc kubenswrapper[4766]: I0129 11:47:19.185886 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4c8b3024-3e34-488a-8cea-ec3ee57fda99" (UID: "4c8b3024-3e34-488a-8cea-ec3ee57fda99"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:47:19 crc kubenswrapper[4766]: I0129 11:47:19.189999 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4c8b3024-3e34-488a-8cea-ec3ee57fda99" (UID: "4c8b3024-3e34-488a-8cea-ec3ee57fda99"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:47:19 crc kubenswrapper[4766]: I0129 11:47:19.192437 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4c8b3024-3e34-488a-8cea-ec3ee57fda99" (UID: "4c8b3024-3e34-488a-8cea-ec3ee57fda99"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:47:19 crc kubenswrapper[4766]: I0129 11:47:19.195223 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 29 11:47:19 crc kubenswrapper[4766]: I0129 11:47:19.195247 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 29 11:47:19 crc kubenswrapper[4766]: I0129 11:47:19.195258 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 29 11:47:19 crc kubenswrapper[4766]: I0129 11:47:19.195267 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcdc2\" (UniqueName: \"kubernetes.io/projected/4c8b3024-3e34-488a-8cea-ec3ee57fda99-kube-api-access-dcdc2\") on node \"crc\" DevicePath \"\""
Jan 29 11:47:19 crc kubenswrapper[4766]: I0129 11:47:19.195276 4766 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 29 11:47:19 crc kubenswrapper[4766]: I0129 11:47:19.195285 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c8b3024-3e34-488a-8cea-ec3ee57fda99-config\") on node \"crc\" DevicePath \"\""
Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.083168 4766 generic.go:334] "Generic (PLEG): container finished" podID="4d07c99d-fe00-4217-8d7a-2f848e825bf1" containerID="64d83de583bde671fb785f7752fa4aa30c03acb174e1dee3aaeafc4d860250c2" exitCode=0
Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.083276 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4d07c99d-fe00-4217-8d7a-2f848e825bf1","Type":"ContainerDied","Data":"64d83de583bde671fb785f7752fa4aa30c03acb174e1dee3aaeafc4d860250c2"}
Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.083543 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-6cnjd"
Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.108906 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-6cnjd"]
Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.131682 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-6cnjd"]
Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.495100 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-667bcbf4cf-kw66x"]
Jan 29 11:47:20 crc kubenswrapper[4766]: E0129 11:47:20.495467 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c8b3024-3e34-488a-8cea-ec3ee57fda99" containerName="init"
Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.495483 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c8b3024-3e34-488a-8cea-ec3ee57fda99" containerName="init"
Jan 29 11:47:20 crc kubenswrapper[4766]: E0129 11:47:20.495493 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c8b3024-3e34-488a-8cea-ec3ee57fda99" containerName="dnsmasq-dns"
Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.495500 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c8b3024-3e34-488a-8cea-ec3ee57fda99" containerName="dnsmasq-dns"
Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.495668 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c8b3024-3e34-488a-8cea-ec3ee57fda99" containerName="dnsmasq-dns"
Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.497043 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-667bcbf4cf-kw66x"
Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.500257 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.501245 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.508323 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.509920 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-667bcbf4cf-kw66x"]
Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.524121 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55700325-5d09-47fc-adad-06c1a8fbbee4-combined-ca-bundle\") pod \"swift-proxy-667bcbf4cf-kw66x\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " pod="openstack/swift-proxy-667bcbf4cf-kw66x"
Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.524229 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5znsl\" (UniqueName: \"kubernetes.io/projected/55700325-5d09-47fc-adad-06c1a8fbbee4-kube-api-access-5znsl\") pod \"swift-proxy-667bcbf4cf-kw66x\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " pod="openstack/swift-proxy-667bcbf4cf-kw66x"
Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.524309 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/55700325-5d09-47fc-adad-06c1a8fbbee4-etc-swift\") pod \"swift-proxy-667bcbf4cf-kw66x\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " pod="openstack/swift-proxy-667bcbf4cf-kw66x"
Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.524388 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55700325-5d09-47fc-adad-06c1a8fbbee4-log-httpd\") pod \"swift-proxy-667bcbf4cf-kw66x\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " pod="openstack/swift-proxy-667bcbf4cf-kw66x"
Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.524547 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55700325-5d09-47fc-adad-06c1a8fbbee4-config-data\") pod \"swift-proxy-667bcbf4cf-kw66x\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " pod="openstack/swift-proxy-667bcbf4cf-kw66x"
Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.524587 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55700325-5d09-47fc-adad-06c1a8fbbee4-public-tls-certs\") pod \"swift-proxy-667bcbf4cf-kw66x\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " pod="openstack/swift-proxy-667bcbf4cf-kw66x"
Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.524670 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55700325-5d09-47fc-adad-06c1a8fbbee4-internal-tls-certs\") pod \"swift-proxy-667bcbf4cf-kw66x\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " pod="openstack/swift-proxy-667bcbf4cf-kw66x"
Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.524751 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55700325-5d09-47fc-adad-06c1a8fbbee4-run-httpd\") pod \"swift-proxy-667bcbf4cf-kw66x\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " pod="openstack/swift-proxy-667bcbf4cf-kw66x"
Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.627630 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55700325-5d09-47fc-adad-06c1a8fbbee4-combined-ca-bundle\") pod \"swift-proxy-667bcbf4cf-kw66x\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " pod="openstack/swift-proxy-667bcbf4cf-kw66x"
Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.627777 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5znsl\" (UniqueName: \"kubernetes.io/projected/55700325-5d09-47fc-adad-06c1a8fbbee4-kube-api-access-5znsl\") pod \"swift-proxy-667bcbf4cf-kw66x\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " pod="openstack/swift-proxy-667bcbf4cf-kw66x"
Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.627851 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/55700325-5d09-47fc-adad-06c1a8fbbee4-etc-swift\") pod \"swift-proxy-667bcbf4cf-kw66x\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " pod="openstack/swift-proxy-667bcbf4cf-kw66x"
Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.627916 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55700325-5d09-47fc-adad-06c1a8fbbee4-log-httpd\") pod \"swift-proxy-667bcbf4cf-kw66x\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " pod="openstack/swift-proxy-667bcbf4cf-kw66x"
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55700325-5d09-47fc-adad-06c1a8fbbee4-log-httpd\") pod \"swift-proxy-667bcbf4cf-kw66x\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " pod="openstack/swift-proxy-667bcbf4cf-kw66x" Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.627987 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55700325-5d09-47fc-adad-06c1a8fbbee4-config-data\") pod \"swift-proxy-667bcbf4cf-kw66x\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " pod="openstack/swift-proxy-667bcbf4cf-kw66x" Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.628019 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55700325-5d09-47fc-adad-06c1a8fbbee4-public-tls-certs\") pod \"swift-proxy-667bcbf4cf-kw66x\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " pod="openstack/swift-proxy-667bcbf4cf-kw66x" Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.628075 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55700325-5d09-47fc-adad-06c1a8fbbee4-internal-tls-certs\") pod \"swift-proxy-667bcbf4cf-kw66x\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " pod="openstack/swift-proxy-667bcbf4cf-kw66x" Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.628131 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55700325-5d09-47fc-adad-06c1a8fbbee4-run-httpd\") pod \"swift-proxy-667bcbf4cf-kw66x\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " pod="openstack/swift-proxy-667bcbf4cf-kw66x" Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.629114 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55700325-5d09-47fc-adad-06c1a8fbbee4-log-httpd\") pod \"swift-proxy-667bcbf4cf-kw66x\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " pod="openstack/swift-proxy-667bcbf4cf-kw66x" Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.629321 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55700325-5d09-47fc-adad-06c1a8fbbee4-run-httpd\") pod \"swift-proxy-667bcbf4cf-kw66x\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " pod="openstack/swift-proxy-667bcbf4cf-kw66x" Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.633790 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55700325-5d09-47fc-adad-06c1a8fbbee4-combined-ca-bundle\") pod \"swift-proxy-667bcbf4cf-kw66x\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " pod="openstack/swift-proxy-667bcbf4cf-kw66x" Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.634992 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55700325-5d09-47fc-adad-06c1a8fbbee4-config-data\") pod \"swift-proxy-667bcbf4cf-kw66x\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " pod="openstack/swift-proxy-667bcbf4cf-kw66x" Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.635750 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55700325-5d09-47fc-adad-06c1a8fbbee4-internal-tls-certs\") pod 
\"swift-proxy-667bcbf4cf-kw66x\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " pod="openstack/swift-proxy-667bcbf4cf-kw66x" Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.642855 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/55700325-5d09-47fc-adad-06c1a8fbbee4-etc-swift\") pod \"swift-proxy-667bcbf4cf-kw66x\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " pod="openstack/swift-proxy-667bcbf4cf-kw66x" Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.647568 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55700325-5d09-47fc-adad-06c1a8fbbee4-public-tls-certs\") pod \"swift-proxy-667bcbf4cf-kw66x\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " pod="openstack/swift-proxy-667bcbf4cf-kw66x" Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.650710 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5znsl\" (UniqueName: \"kubernetes.io/projected/55700325-5d09-47fc-adad-06c1a8fbbee4-kube-api-access-5znsl\") pod \"swift-proxy-667bcbf4cf-kw66x\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " pod="openstack/swift-proxy-667bcbf4cf-kw66x" Jan 29 11:47:20 crc kubenswrapper[4766]: I0129 11:47:20.814996 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-667bcbf4cf-kw66x" Jan 29 11:47:21 crc kubenswrapper[4766]: I0129 11:47:21.240716 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c8b3024-3e34-488a-8cea-ec3ee57fda99" path="/var/lib/kubelet/pods/4c8b3024-3e34-488a-8cea-ec3ee57fda99/volumes" Jan 29 11:47:21 crc kubenswrapper[4766]: I0129 11:47:21.419077 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-667bcbf4cf-kw66x"] Jan 29 11:47:21 crc kubenswrapper[4766]: W0129 11:47:21.430457 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55700325_5d09_47fc_adad_06c1a8fbbee4.slice/crio-a235dfc083948f94bf2c40bf7cd1dc38db67fc96f1761fb59b0c274560a45e5c WatchSource:0}: Error finding container a235dfc083948f94bf2c40bf7cd1dc38db67fc96f1761fb59b0c274560a45e5c: Status 404 returned error can't find the container with id a235dfc083948f94bf2c40bf7cd1dc38db67fc96f1761fb59b0c274560a45e5c Jan 29 11:47:22 crc kubenswrapper[4766]: I0129 11:47:22.107001 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-667bcbf4cf-kw66x" event={"ID":"55700325-5d09-47fc-adad-06c1a8fbbee4","Type":"ContainerStarted","Data":"02a9693eb69c4db962a68ada03e1079457c0c7d3b72c123f0dcacc6c9a65052f"} Jan 29 11:47:22 crc kubenswrapper[4766]: I0129 11:47:22.107375 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-667bcbf4cf-kw66x" Jan 29 11:47:22 crc kubenswrapper[4766]: I0129 11:47:22.107392 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-667bcbf4cf-kw66x" event={"ID":"55700325-5d09-47fc-adad-06c1a8fbbee4","Type":"ContainerStarted","Data":"b0a6a18a240b7164e62a8b48d6bc1c1984abcb6427f837cc8857a8516b49b51f"} Jan 29 11:47:22 crc kubenswrapper[4766]: I0129 11:47:22.107403 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-667bcbf4cf-kw66x" event={"ID":"55700325-5d09-47fc-adad-06c1a8fbbee4","Type":"ContainerStarted","Data":"a235dfc083948f94bf2c40bf7cd1dc38db67fc96f1761fb59b0c274560a45e5c"} Jan 29 11:47:22 
Jan 29 11:47:22 crc kubenswrapper[4766]: I0129 11:47:22.835224 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 11:47:22 crc kubenswrapper[4766]: I0129 11:47:22.835589 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7c8b47a0-0ceb-45ec-bbc4-9747d92f0619" containerName="ceilometer-central-agent" containerID="cri-o://ac1cc8319719cbef03c5a82f3a7ef72fb4e425fac3df50f060722739c6183ff7" gracePeriod=30
Jan 29 11:47:22 crc kubenswrapper[4766]: I0129 11:47:22.835723 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7c8b47a0-0ceb-45ec-bbc4-9747d92f0619" containerName="proxy-httpd" containerID="cri-o://58d98541d32a0de2879b147c35e937a70f935dff22781671baa8a4bfe2955a39" gracePeriod=30
Jan 29 11:47:22 crc kubenswrapper[4766]: I0129 11:47:22.835763 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7c8b47a0-0ceb-45ec-bbc4-9747d92f0619" containerName="sg-core" containerID="cri-o://719ed8a40266897095fd4aac44082047f26bfc965df4c832be6919779bd55106" gracePeriod=30
Jan 29 11:47:22 crc kubenswrapper[4766]: I0129 11:47:22.835793 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7c8b47a0-0ceb-45ec-bbc4-9747d92f0619" containerName="ceilometer-notification-agent" containerID="cri-o://644a1eed80c6bd834a2d1d821fe616dd82c719bf8607d75bf46fe7d75bdf3811" gracePeriod=30
Jan 29 11:47:22 crc kubenswrapper[4766]: I0129 11:47:22.849228 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="7c8b47a0-0ceb-45ec-bbc4-9747d92f0619" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.155:3000/\": EOF"
Jan 29 11:47:23 crc kubenswrapper[4766]: I0129 11:47:23.149330 4766 generic.go:334] "Generic (PLEG): container finished" podID="7c8b47a0-0ceb-45ec-bbc4-9747d92f0619" containerID="58d98541d32a0de2879b147c35e937a70f935dff22781671baa8a4bfe2955a39" exitCode=0
Jan 29 11:47:23 crc kubenswrapper[4766]: I0129 11:47:23.149360 4766 generic.go:334] "Generic (PLEG): container finished" podID="7c8b47a0-0ceb-45ec-bbc4-9747d92f0619" containerID="719ed8a40266897095fd4aac44082047f26bfc965df4c832be6919779bd55106" exitCode=2
Jan 29 11:47:23 crc kubenswrapper[4766]: I0129 11:47:23.149420 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619","Type":"ContainerDied","Data":"58d98541d32a0de2879b147c35e937a70f935dff22781671baa8a4bfe2955a39"}
Jan 29 11:47:23 crc kubenswrapper[4766]: I0129 11:47:23.149451 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619","Type":"ContainerDied","Data":"719ed8a40266897095fd4aac44082047f26bfc965df4c832be6919779bd55106"}
Jan 29 11:47:23 crc kubenswrapper[4766]: I0129 11:47:23.154856 4766 generic.go:334] "Generic (PLEG): container finished" podID="4d07c99d-fe00-4217-8d7a-2f848e825bf1" containerID="db668750241607632455ecc4015916149a6a26fb010ad04cf265aaeb0ebb4649" exitCode=0
Jan 29 11:47:23 crc kubenswrapper[4766]: I0129 11:47:23.154918 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4d07c99d-fe00-4217-8d7a-2f848e825bf1","Type":"ContainerDied","Data":"db668750241607632455ecc4015916149a6a26fb010ad04cf265aaeb0ebb4649"}
Jan 29 11:47:23 crc kubenswrapper[4766]: I0129 11:47:23.154997 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-667bcbf4cf-kw66x"
Jan 29 11:47:23 crc kubenswrapper[4766]: I0129 11:47:23.849839 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 11:47:23 crc kubenswrapper[4766]: I0129 11:47:23.850403 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d5ffbf34-3350-41d4-ae62-94700d3e40bc" containerName="glance-log" containerID="cri-o://36a7b334ee749d9bdadb767f5fdf15c6eab854818ce783508b3830c80759ff69" gracePeriod=30
Jan 29 11:47:23 crc kubenswrapper[4766]: I0129 11:47:23.850548 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d5ffbf34-3350-41d4-ae62-94700d3e40bc" containerName="glance-httpd" containerID="cri-o://5821b006dbee2cf56a29b92d60e12310db1ab8ccb5a58f1a49999ad2562e76df" gracePeriod=30
Jan 29 11:47:24 crc kubenswrapper[4766]: I0129 11:47:24.192633 4766 generic.go:334] "Generic (PLEG): container finished" podID="7c8b47a0-0ceb-45ec-bbc4-9747d92f0619" containerID="644a1eed80c6bd834a2d1d821fe616dd82c719bf8607d75bf46fe7d75bdf3811" exitCode=0
Jan 29 11:47:24 crc kubenswrapper[4766]: I0129 11:47:24.192668 4766 generic.go:334] "Generic (PLEG): container finished" podID="7c8b47a0-0ceb-45ec-bbc4-9747d92f0619" containerID="ac1cc8319719cbef03c5a82f3a7ef72fb4e425fac3df50f060722739c6183ff7" exitCode=0
Jan 29 11:47:24 crc kubenswrapper[4766]: I0129 11:47:24.192710 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619","Type":"ContainerDied","Data":"644a1eed80c6bd834a2d1d821fe616dd82c719bf8607d75bf46fe7d75bdf3811"}
Jan 29 11:47:24 crc kubenswrapper[4766]: I0129 11:47:24.192734 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619","Type":"ContainerDied","Data":"ac1cc8319719cbef03c5a82f3a7ef72fb4e425fac3df50f060722739c6183ff7"}
Jan 29 11:47:24 crc kubenswrapper[4766]: I0129 11:47:24.211282 4766 generic.go:334] "Generic (PLEG): container finished" podID="d5ffbf34-3350-41d4-ae62-94700d3e40bc" containerID="36a7b334ee749d9bdadb767f5fdf15c6eab854818ce783508b3830c80759ff69" exitCode=143
Jan 29 11:47:24 crc kubenswrapper[4766]: I0129 11:47:24.211336 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d5ffbf34-3350-41d4-ae62-94700d3e40bc","Type":"ContainerDied","Data":"36a7b334ee749d9bdadb767f5fdf15c6eab854818ce783508b3830c80759ff69"}
Jan 29 11:47:24 crc kubenswrapper[4766]: I0129 11:47:24.988681 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 11:47:24 crc kubenswrapper[4766]: I0129 11:47:24.997364 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="34b4e558-a02f-4604-91ce-b99c34e061dd" containerName="glance-log" containerID="cri-o://8710b4eb2f7c865ac805f5eb141b278b7f982a89471c605132cd4d9e1b77baf3" gracePeriod=30
Jan 29 11:47:24 crc kubenswrapper[4766]: I0129 11:47:24.997631 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="34b4e558-a02f-4604-91ce-b99c34e061dd" containerName="glance-httpd" containerID="cri-o://234cd293c0fd1f05219c6c823a7a1f6d478a64dd1cfb8d4f9c760d4edb64cb35" gracePeriod=30
Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.227521 4766 generic.go:334] "Generic (PLEG): container finished" podID="34b4e558-a02f-4604-91ce-b99c34e061dd" containerID="8710b4eb2f7c865ac805f5eb141b278b7f982a89471c605132cd4d9e1b77baf3" exitCode=143
Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.235813 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-862hs"]
Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.237128 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"34b4e558-a02f-4604-91ce-b99c34e061dd","Type":"ContainerDied","Data":"8710b4eb2f7c865ac805f5eb141b278b7f982a89471c605132cd4d9e1b77baf3"}
Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.237235 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-862hs"
Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.259261 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-862hs"]
Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.326105 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-hdfk6"]
Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.327359 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-hdfk6"
Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.339818 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e-operator-scripts\") pod \"nova-api-db-create-862hs\" (UID: \"1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e\") " pod="openstack/nova-api-db-create-862hs"
Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.340303 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6qzh\" (UniqueName: \"kubernetes.io/projected/1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e-kube-api-access-l6qzh\") pod \"nova-api-db-create-862hs\" (UID: \"1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e\") " pod="openstack/nova-api-db-create-862hs"
Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.348266 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-hdfk6"]
Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.369211 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-ba9c-account-create-update-fvdrk"]
Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.370851 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ba9c-account-create-update-fvdrk"
Need to start a new one" pod="openstack/nova-api-ba9c-account-create-update-fvdrk" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.374201 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.392973 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ba9c-account-create-update-fvdrk"] Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.442857 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnfxp\" (UniqueName: \"kubernetes.io/projected/075438aa-afe6-4a7c-aa4a-a9b89406b170-kube-api-access-gnfxp\") pod \"nova-cell0-db-create-hdfk6\" (UID: \"075438aa-afe6-4a7c-aa4a-a9b89406b170\") " pod="openstack/nova-cell0-db-create-hdfk6" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.442935 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e-operator-scripts\") pod \"nova-api-db-create-862hs\" (UID: \"1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e\") " pod="openstack/nova-api-db-create-862hs" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.442957 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/075438aa-afe6-4a7c-aa4a-a9b89406b170-operator-scripts\") pod \"nova-cell0-db-create-hdfk6\" (UID: \"075438aa-afe6-4a7c-aa4a-a9b89406b170\") " pod="openstack/nova-cell0-db-create-hdfk6" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.443010 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6qzh\" (UniqueName: \"kubernetes.io/projected/1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e-kube-api-access-l6qzh\") pod \"nova-api-db-create-862hs\" (UID: \"1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e\") " pod="openstack/nova-api-db-create-862hs" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.443964 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e-operator-scripts\") pod \"nova-api-db-create-862hs\" (UID: \"1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e\") " pod="openstack/nova-api-db-create-862hs" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.456952 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-9kz8m"] Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.458055 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-9kz8m" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.469664 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-9kz8m"] Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.486636 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6qzh\" (UniqueName: \"kubernetes.io/projected/1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e-kube-api-access-l6qzh\") pod \"nova-api-db-create-862hs\" (UID: \"1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e\") " pod="openstack/nova-api-db-create-862hs" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.533962 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-5c55-account-create-update-scf58"] Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.535069 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5c55-account-create-update-scf58" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.540904 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.544533 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnfxp\" (UniqueName: \"kubernetes.io/projected/075438aa-afe6-4a7c-aa4a-a9b89406b170-kube-api-access-gnfxp\") pod \"nova-cell0-db-create-hdfk6\" (UID: \"075438aa-afe6-4a7c-aa4a-a9b89406b170\") " pod="openstack/nova-cell0-db-create-hdfk6" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.544645 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/075438aa-afe6-4a7c-aa4a-a9b89406b170-operator-scripts\") pod \"nova-cell0-db-create-hdfk6\" (UID: \"075438aa-afe6-4a7c-aa4a-a9b89406b170\") " pod="openstack/nova-cell0-db-create-hdfk6" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.544929 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgncs\" (UniqueName: \"kubernetes.io/projected/fd606a73-05c8-4c8f-b4f2-281a9f308e43-kube-api-access-xgncs\") pod \"nova-api-ba9c-account-create-update-fvdrk\" (UID: \"fd606a73-05c8-4c8f-b4f2-281a9f308e43\") " pod="openstack/nova-api-ba9c-account-create-update-fvdrk" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.544967 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd606a73-05c8-4c8f-b4f2-281a9f308e43-operator-scripts\") pod \"nova-api-ba9c-account-create-update-fvdrk\" (UID: \"fd606a73-05c8-4c8f-b4f2-281a9f308e43\") " pod="openstack/nova-api-ba9c-account-create-update-fvdrk" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.545662 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/075438aa-afe6-4a7c-aa4a-a9b89406b170-operator-scripts\") pod \"nova-cell0-db-create-hdfk6\" (UID: \"075438aa-afe6-4a7c-aa4a-a9b89406b170\") " pod="openstack/nova-cell0-db-create-hdfk6" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.562170 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5c55-account-create-update-scf58"] Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.562869 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-862hs" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.574611 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnfxp\" (UniqueName: \"kubernetes.io/projected/075438aa-afe6-4a7c-aa4a-a9b89406b170-kube-api-access-gnfxp\") pod \"nova-cell0-db-create-hdfk6\" (UID: \"075438aa-afe6-4a7c-aa4a-a9b89406b170\") " pod="openstack/nova-cell0-db-create-hdfk6" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.646627 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpjjw\" (UniqueName: \"kubernetes.io/projected/cae3e2e2-58b5-4a7a-ae77-7712d85990ea-kube-api-access-wpjjw\") pod \"nova-cell1-db-create-9kz8m\" (UID: \"cae3e2e2-58b5-4a7a-ae77-7712d85990ea\") " pod="openstack/nova-cell1-db-create-9kz8m" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.646688 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b251c8b1-bef8-4e31-86dd-fdfca1dc0594-operator-scripts\") pod \"nova-cell0-5c55-account-create-update-scf58\" (UID: \"b251c8b1-bef8-4e31-86dd-fdfca1dc0594\") " pod="openstack/nova-cell0-5c55-account-create-update-scf58" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.646716 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pgh9\" (UniqueName: \"kubernetes.io/projected/b251c8b1-bef8-4e31-86dd-fdfca1dc0594-kube-api-access-4pgh9\") pod \"nova-cell0-5c55-account-create-update-scf58\" (UID: \"b251c8b1-bef8-4e31-86dd-fdfca1dc0594\") " pod="openstack/nova-cell0-5c55-account-create-update-scf58" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.646920 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-hdfk6" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.647345 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cae3e2e2-58b5-4a7a-ae77-7712d85990ea-operator-scripts\") pod \"nova-cell1-db-create-9kz8m\" (UID: \"cae3e2e2-58b5-4a7a-ae77-7712d85990ea\") " pod="openstack/nova-cell1-db-create-9kz8m" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.647497 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgncs\" (UniqueName: \"kubernetes.io/projected/fd606a73-05c8-4c8f-b4f2-281a9f308e43-kube-api-access-xgncs\") pod \"nova-api-ba9c-account-create-update-fvdrk\" (UID: \"fd606a73-05c8-4c8f-b4f2-281a9f308e43\") " pod="openstack/nova-api-ba9c-account-create-update-fvdrk" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.647528 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd606a73-05c8-4c8f-b4f2-281a9f308e43-operator-scripts\") pod \"nova-api-ba9c-account-create-update-fvdrk\" (UID: \"fd606a73-05c8-4c8f-b4f2-281a9f308e43\") " pod="openstack/nova-api-ba9c-account-create-update-fvdrk" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.648270 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd606a73-05c8-4c8f-b4f2-281a9f308e43-operator-scripts\") pod \"nova-api-ba9c-account-create-update-fvdrk\" (UID: \"fd606a73-05c8-4c8f-b4f2-281a9f308e43\") " pod="openstack/nova-api-ba9c-account-create-update-fvdrk" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.666622 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgncs\" (UniqueName: \"kubernetes.io/projected/fd606a73-05c8-4c8f-b4f2-281a9f308e43-kube-api-access-xgncs\") pod \"nova-api-ba9c-account-create-update-fvdrk\" (UID: \"fd606a73-05c8-4c8f-b4f2-281a9f308e43\") " pod="openstack/nova-api-ba9c-account-create-update-fvdrk" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.693464 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ba9c-account-create-update-fvdrk" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.731959 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-a5ac-account-create-update-k9rgg"] Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.733042 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-a5ac-account-create-update-k9rgg" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.741862 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.746343 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-a5ac-account-create-update-k9rgg"] Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.748948 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpjjw\" (UniqueName: \"kubernetes.io/projected/cae3e2e2-58b5-4a7a-ae77-7712d85990ea-kube-api-access-wpjjw\") pod \"nova-cell1-db-create-9kz8m\" (UID: \"cae3e2e2-58b5-4a7a-ae77-7712d85990ea\") " pod="openstack/nova-cell1-db-create-9kz8m" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.748980 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b251c8b1-bef8-4e31-86dd-fdfca1dc0594-operator-scripts\") pod \"nova-cell0-5c55-account-create-update-scf58\" (UID: \"b251c8b1-bef8-4e31-86dd-fdfca1dc0594\") " pod="openstack/nova-cell0-5c55-account-create-update-scf58" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.749002 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pgh9\" (UniqueName: \"kubernetes.io/projected/b251c8b1-bef8-4e31-86dd-fdfca1dc0594-kube-api-access-4pgh9\") pod \"nova-cell0-5c55-account-create-update-scf58\" (UID: \"b251c8b1-bef8-4e31-86dd-fdfca1dc0594\") " pod="openstack/nova-cell0-5c55-account-create-update-scf58" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.749020 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cae3e2e2-58b5-4a7a-ae77-7712d85990ea-operator-scripts\") pod \"nova-cell1-db-create-9kz8m\" (UID: \"cae3e2e2-58b5-4a7a-ae77-7712d85990ea\") " pod="openstack/nova-cell1-db-create-9kz8m" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.749743 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cae3e2e2-58b5-4a7a-ae77-7712d85990ea-operator-scripts\") pod \"nova-cell1-db-create-9kz8m\" (UID: \"cae3e2e2-58b5-4a7a-ae77-7712d85990ea\") " pod="openstack/nova-cell1-db-create-9kz8m" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.750565 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b251c8b1-bef8-4e31-86dd-fdfca1dc0594-operator-scripts\") pod \"nova-cell0-5c55-account-create-update-scf58\" (UID: \"b251c8b1-bef8-4e31-86dd-fdfca1dc0594\") " pod="openstack/nova-cell0-5c55-account-create-update-scf58" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.774205 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pgh9\" (UniqueName: \"kubernetes.io/projected/b251c8b1-bef8-4e31-86dd-fdfca1dc0594-kube-api-access-4pgh9\") pod \"nova-cell0-5c55-account-create-update-scf58\" (UID: \"b251c8b1-bef8-4e31-86dd-fdfca1dc0594\") " pod="openstack/nova-cell0-5c55-account-create-update-scf58" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.791260 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpjjw\" (UniqueName: 
\"kubernetes.io/projected/cae3e2e2-58b5-4a7a-ae77-7712d85990ea-kube-api-access-wpjjw\") pod \"nova-cell1-db-create-9kz8m\" (UID: \"cae3e2e2-58b5-4a7a-ae77-7712d85990ea\") " pod="openstack/nova-cell1-db-create-9kz8m" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.834681 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9kz8m" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.851071 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kqpr\" (UniqueName: \"kubernetes.io/projected/e5484d77-284b-4422-aa8a-c44761f4c8e9-kube-api-access-2kqpr\") pod \"nova-cell1-a5ac-account-create-update-k9rgg\" (UID: \"e5484d77-284b-4422-aa8a-c44761f4c8e9\") " pod="openstack/nova-cell1-a5ac-account-create-update-k9rgg" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.851275 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5484d77-284b-4422-aa8a-c44761f4c8e9-operator-scripts\") pod \"nova-cell1-a5ac-account-create-update-k9rgg\" (UID: \"e5484d77-284b-4422-aa8a-c44761f4c8e9\") " pod="openstack/nova-cell1-a5ac-account-create-update-k9rgg" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.854584 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5c55-account-create-update-scf58" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.952967 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5484d77-284b-4422-aa8a-c44761f4c8e9-operator-scripts\") pod \"nova-cell1-a5ac-account-create-update-k9rgg\" (UID: \"e5484d77-284b-4422-aa8a-c44761f4c8e9\") " pod="openstack/nova-cell1-a5ac-account-create-update-k9rgg" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.953142 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kqpr\" (UniqueName: \"kubernetes.io/projected/e5484d77-284b-4422-aa8a-c44761f4c8e9-kube-api-access-2kqpr\") pod \"nova-cell1-a5ac-account-create-update-k9rgg\" (UID: \"e5484d77-284b-4422-aa8a-c44761f4c8e9\") " pod="openstack/nova-cell1-a5ac-account-create-update-k9rgg" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.953936 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5484d77-284b-4422-aa8a-c44761f4c8e9-operator-scripts\") pod \"nova-cell1-a5ac-account-create-update-k9rgg\" (UID: \"e5484d77-284b-4422-aa8a-c44761f4c8e9\") " pod="openstack/nova-cell1-a5ac-account-create-update-k9rgg" Jan 29 11:47:25 crc kubenswrapper[4766]: I0129 11:47:25.979821 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kqpr\" (UniqueName: \"kubernetes.io/projected/e5484d77-284b-4422-aa8a-c44761f4c8e9-kube-api-access-2kqpr\") pod \"nova-cell1-a5ac-account-create-update-k9rgg\" (UID: \"e5484d77-284b-4422-aa8a-c44761f4c8e9\") " pod="openstack/nova-cell1-a5ac-account-create-update-k9rgg" Jan 29 11:47:26 crc kubenswrapper[4766]: I0129 11:47:26.075120 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-a5ac-account-create-update-k9rgg" Jan 29 11:47:27 crc kubenswrapper[4766]: I0129 11:47:27.089796 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 29 11:47:27 crc kubenswrapper[4766]: E0129 11:47:27.104782 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5ffbf34_3350_41d4_ae62_94700d3e40bc.slice/crio-5821b006dbee2cf56a29b92d60e12310db1ab8ccb5a58f1a49999ad2562e76df.scope\": RecentStats: unable to find data in memory cache]" Jan 29 11:47:27 crc kubenswrapper[4766]: I0129 11:47:27.392216 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6854899c48-wx94v" Jan 29 11:47:28 crc kubenswrapper[4766]: I0129 11:47:28.256393 4766 generic.go:334] "Generic (PLEG): container finished" podID="d5ffbf34-3350-41d4-ae62-94700d3e40bc" containerID="5821b006dbee2cf56a29b92d60e12310db1ab8ccb5a58f1a49999ad2562e76df" exitCode=0 Jan 29 11:47:28 crc kubenswrapper[4766]: I0129 11:47:28.256560 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d5ffbf34-3350-41d4-ae62-94700d3e40bc","Type":"ContainerDied","Data":"5821b006dbee2cf56a29b92d60e12310db1ab8ccb5a58f1a49999ad2562e76df"} Jan 29 11:47:28 crc kubenswrapper[4766]: I0129 11:47:28.258788 4766 generic.go:334] "Generic (PLEG): container finished" podID="34b4e558-a02f-4604-91ce-b99c34e061dd" containerID="234cd293c0fd1f05219c6c823a7a1f6d478a64dd1cfb8d4f9c760d4edb64cb35" exitCode=0 Jan 29 11:47:28 crc kubenswrapper[4766]: I0129 11:47:28.258817 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"34b4e558-a02f-4604-91ce-b99c34e061dd","Type":"ContainerDied","Data":"234cd293c0fd1f05219c6c823a7a1f6d478a64dd1cfb8d4f9c760d4edb64cb35"} Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.348329 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.440453 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d07c99d-fe00-4217-8d7a-2f848e825bf1-combined-ca-bundle\") pod \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\" (UID: \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\") " Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.440541 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d07c99d-fe00-4217-8d7a-2f848e825bf1-config-data\") pod \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\" (UID: \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\") " Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.440569 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d07c99d-fe00-4217-8d7a-2f848e825bf1-scripts\") pod \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\" (UID: \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\") " Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.440671 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d07c99d-fe00-4217-8d7a-2f848e825bf1-config-data-custom\") pod \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\" (UID: \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\") " Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.440715 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d07c99d-fe00-4217-8d7a-2f848e825bf1-etc-machine-id\") pod \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\" (UID: \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\") " Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.440737 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbsz8\" (UniqueName: \"kubernetes.io/projected/4d07c99d-fe00-4217-8d7a-2f848e825bf1-kube-api-access-tbsz8\") pod \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\" (UID: \"4d07c99d-fe00-4217-8d7a-2f848e825bf1\") " Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.442637 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d07c99d-fe00-4217-8d7a-2f848e825bf1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4d07c99d-fe00-4217-8d7a-2f848e825bf1" (UID: "4d07c99d-fe00-4217-8d7a-2f848e825bf1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.457031 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d07c99d-fe00-4217-8d7a-2f848e825bf1-kube-api-access-tbsz8" (OuterVolumeSpecName: "kube-api-access-tbsz8") pod "4d07c99d-fe00-4217-8d7a-2f848e825bf1" (UID: "4d07c99d-fe00-4217-8d7a-2f848e825bf1"). InnerVolumeSpecName "kube-api-access-tbsz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.460866 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d07c99d-fe00-4217-8d7a-2f848e825bf1-scripts" (OuterVolumeSpecName: "scripts") pod "4d07c99d-fe00-4217-8d7a-2f848e825bf1" (UID: "4d07c99d-fe00-4217-8d7a-2f848e825bf1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.468631 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d07c99d-fe00-4217-8d7a-2f848e825bf1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4d07c99d-fe00-4217-8d7a-2f848e825bf1" (UID: "4d07c99d-fe00-4217-8d7a-2f848e825bf1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.547662 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d07c99d-fe00-4217-8d7a-2f848e825bf1-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.548037 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d07c99d-fe00-4217-8d7a-2f848e825bf1-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.548101 4766 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d07c99d-fe00-4217-8d7a-2f848e825bf1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.548168 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbsz8\" (UniqueName: \"kubernetes.io/projected/4d07c99d-fe00-4217-8d7a-2f848e825bf1-kube-api-access-tbsz8\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.553680 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d07c99d-fe00-4217-8d7a-2f848e825bf1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d07c99d-fe00-4217-8d7a-2f848e825bf1" (UID: "4d07c99d-fe00-4217-8d7a-2f848e825bf1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.627902 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.654699 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d07c99d-fe00-4217-8d7a-2f848e825bf1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.734323 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d07c99d-fe00-4217-8d7a-2f848e825bf1-config-data" (OuterVolumeSpecName: "config-data") pod "4d07c99d-fe00-4217-8d7a-2f848e825bf1" (UID: "4d07c99d-fe00-4217-8d7a-2f848e825bf1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.746849 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.757134 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-scripts\") pod \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\" (UID: \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\") " Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.757186 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-log-httpd\") pod \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\" (UID: \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\") " Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.757287 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-combined-ca-bundle\") pod \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\" (UID: \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\") " Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.757624 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-config-data\") pod \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\" (UID: \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\") " Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.757661 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-run-httpd\") pod \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\" (UID: \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\") " Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.757724 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-sg-core-conf-yaml\") pod \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\" (UID: \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\") " Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.757850 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slm47\" (UniqueName: \"kubernetes.io/projected/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-kube-api-access-slm47\") pod \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\" (UID: \"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619\") " Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.758361 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d07c99d-fe00-4217-8d7a-2f848e825bf1-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.759216 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7c8b47a0-0ceb-45ec-bbc4-9747d92f0619" (UID: "7c8b47a0-0ceb-45ec-bbc4-9747d92f0619"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.759505 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7c8b47a0-0ceb-45ec-bbc4-9747d92f0619" (UID: "7c8b47a0-0ceb-45ec-bbc4-9747d92f0619"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.764549 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-scripts" (OuterVolumeSpecName: "scripts") pod "7c8b47a0-0ceb-45ec-bbc4-9747d92f0619" (UID: "7c8b47a0-0ceb-45ec-bbc4-9747d92f0619"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.766243 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-kube-api-access-slm47" (OuterVolumeSpecName: "kube-api-access-slm47") pod "7c8b47a0-0ceb-45ec-bbc4-9747d92f0619" (UID: "7c8b47a0-0ceb-45ec-bbc4-9747d92f0619"). InnerVolumeSpecName "kube-api-access-slm47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.808967 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7c8b47a0-0ceb-45ec-bbc4-9747d92f0619" (UID: "7c8b47a0-0ceb-45ec-bbc4-9747d92f0619"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.830819 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-667bcbf4cf-kw66x" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.844540 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-667bcbf4cf-kw66x" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.860103 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"34b4e558-a02f-4604-91ce-b99c34e061dd\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.860228 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34b4e558-a02f-4604-91ce-b99c34e061dd-httpd-run\") pod \"34b4e558-a02f-4604-91ce-b99c34e061dd\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.860372 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34b4e558-a02f-4604-91ce-b99c34e061dd-logs\") pod \"34b4e558-a02f-4604-91ce-b99c34e061dd\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.860552 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n89wm\" (UniqueName: \"kubernetes.io/projected/34b4e558-a02f-4604-91ce-b99c34e061dd-kube-api-access-n89wm\") pod \"34b4e558-a02f-4604-91ce-b99c34e061dd\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.860635 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34b4e558-a02f-4604-91ce-b99c34e061dd-scripts\") pod \"34b4e558-a02f-4604-91ce-b99c34e061dd\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.860666 4766 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34b4e558-a02f-4604-91ce-b99c34e061dd-internal-tls-certs\") pod \"34b4e558-a02f-4604-91ce-b99c34e061dd\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.860732 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b4e558-a02f-4604-91ce-b99c34e061dd-combined-ca-bundle\") pod \"34b4e558-a02f-4604-91ce-b99c34e061dd\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.860773 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34b4e558-a02f-4604-91ce-b99c34e061dd-config-data\") pod \"34b4e558-a02f-4604-91ce-b99c34e061dd\" (UID: \"34b4e558-a02f-4604-91ce-b99c34e061dd\") " Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.861001 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34b4e558-a02f-4604-91ce-b99c34e061dd-logs" (OuterVolumeSpecName: "logs") pod "34b4e558-a02f-4604-91ce-b99c34e061dd" (UID: "34b4e558-a02f-4604-91ce-b99c34e061dd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.861258 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34b4e558-a02f-4604-91ce-b99c34e061dd-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "34b4e558-a02f-4604-91ce-b99c34e061dd" (UID: "34b4e558-a02f-4604-91ce-b99c34e061dd"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.861625 4766 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.861717 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slm47\" (UniqueName: \"kubernetes.io/projected/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-kube-api-access-slm47\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.861803 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.861880 4766 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.861947 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34b4e558-a02f-4604-91ce-b99c34e061dd-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.862020 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34b4e558-a02f-4604-91ce-b99c34e061dd-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.862092 4766 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.874860 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "34b4e558-a02f-4604-91ce-b99c34e061dd" (UID: "34b4e558-a02f-4604-91ce-b99c34e061dd"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.881831 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34b4e558-a02f-4604-91ce-b99c34e061dd-scripts" (OuterVolumeSpecName: "scripts") pod "34b4e558-a02f-4604-91ce-b99c34e061dd" (UID: "34b4e558-a02f-4604-91ce-b99c34e061dd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.919322 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34b4e558-a02f-4604-91ce-b99c34e061dd-kube-api-access-n89wm" (OuterVolumeSpecName: "kube-api-access-n89wm") pod "34b4e558-a02f-4604-91ce-b99c34e061dd" (UID: "34b4e558-a02f-4604-91ce-b99c34e061dd"). InnerVolumeSpecName "kube-api-access-n89wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.950078 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-config-data" (OuterVolumeSpecName: "config-data") pod "7c8b47a0-0ceb-45ec-bbc4-9747d92f0619" (UID: "7c8b47a0-0ceb-45ec-bbc4-9747d92f0619"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.967833 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n89wm\" (UniqueName: \"kubernetes.io/projected/34b4e558-a02f-4604-91ce-b99c34e061dd-kube-api-access-n89wm\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.967865 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34b4e558-a02f-4604-91ce-b99c34e061dd-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.967884 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.967894 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:30 crc kubenswrapper[4766]: I0129 11:47:30.997037 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.015556 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c8b47a0-0ceb-45ec-bbc4-9747d92f0619" (UID: "7c8b47a0-0ceb-45ec-bbc4-9747d92f0619"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.049560 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34b4e558-a02f-4604-91ce-b99c34e061dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34b4e558-a02f-4604-91ce-b99c34e061dd" (UID: "34b4e558-a02f-4604-91ce-b99c34e061dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.059631 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34b4e558-a02f-4604-91ce-b99c34e061dd-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "34b4e558-a02f-4604-91ce-b99c34e061dd" (UID: "34b4e558-a02f-4604-91ce-b99c34e061dd"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.075133 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34b4e558-a02f-4604-91ce-b99c34e061dd-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.075168 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b4e558-a02f-4604-91ce-b99c34e061dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.075182 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.075196 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.144570 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34b4e558-a02f-4604-91ce-b99c34e061dd-config-data" (OuterVolumeSpecName: "config-data") pod "34b4e558-a02f-4604-91ce-b99c34e061dd" (UID: "34b4e558-a02f-4604-91ce-b99c34e061dd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.185334 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34b4e558-a02f-4604-91ce-b99c34e061dd-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.250965 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-a5ac-account-create-update-k9rgg"] Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.261393 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ba9c-account-create-update-fvdrk"] Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.281171 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8d49f9cb5-5nhnk" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.283851 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-9kz8m"] Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.293465 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-hdfk6"] Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.311717 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-862hs"] Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.364005 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4d07c99d-fe00-4217-8d7a-2f848e825bf1","Type":"ContainerDied","Data":"a1c16d19373476f00e7375be0da993d0b9b1954159e12d2ff30a03783d1e222b"} Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.364128 4766 scope.go:117] "RemoveContainer" containerID="64d83de583bde671fb785f7752fa4aa30c03acb174e1dee3aaeafc4d860250c2" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.364432 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.396092 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ba9c-account-create-update-fvdrk" event={"ID":"fd606a73-05c8-4c8f-b4f2-281a9f308e43","Type":"ContainerStarted","Data":"a308e2fc0724b7c94bce16c8f7ab730189faadb4d94be4f8cdf9e3557815572f"} Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.436936 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6854899c48-wx94v"] Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.437268 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6854899c48-wx94v" podUID="c0ab1cb8-dc08-4f72-a765-083f2a511a7e" containerName="neutron-api" containerID="cri-o://5569ca78fd49a0ab2d22d3c15c4e29ed104f29d7a18d5ef53dce7fddd9af6896" gracePeriod=30 Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.437836 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6854899c48-wx94v" podUID="c0ab1cb8-dc08-4f72-a765-083f2a511a7e" containerName="neutron-httpd" containerID="cri-o://39e0f3e0ffe14a10427ef4dfc0519bb7bc13b268ccc6da302855ea96686846e9" gracePeriod=30 Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.443604 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f484f11d-a20d-4d69-9619-d5f8df022bd7","Type":"ContainerStarted","Data":"175dd17b34bc65355780323019254f2f358e93d247a0f42a84345efd3579c3e2"} Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.447090 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.481927 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.484503 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a5ac-account-create-update-k9rgg" event={"ID":"e5484d77-284b-4422-aa8a-c44761f4c8e9","Type":"ContainerStarted","Data":"c1afcf6becbd79acb5d010cc4b1f76e5f2da252d7e5ba96b9516c58dfa6b3275"} Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.509629 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c8b47a0-0ceb-45ec-bbc4-9747d92f0619","Type":"ContainerDied","Data":"bd0bd2bd35b4da5af9db66a954d6f6dced9e75b974f30758607f551e695f9337"} Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.509758 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.535791 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 11:47:31 crc kubenswrapper[4766]: E0129 11:47:31.536238 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c8b47a0-0ceb-45ec-bbc4-9747d92f0619" containerName="sg-core" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.536256 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8b47a0-0ceb-45ec-bbc4-9747d92f0619" containerName="sg-core" Jan 29 11:47:31 crc kubenswrapper[4766]: E0129 11:47:31.536266 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b4e558-a02f-4604-91ce-b99c34e061dd" containerName="glance-log" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.536273 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b4e558-a02f-4604-91ce-b99c34e061dd" containerName="glance-log" Jan 29 11:47:31 crc kubenswrapper[4766]: E0129 11:47:31.536293 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d07c99d-fe00-4217-8d7a-2f848e825bf1" containerName="cinder-scheduler" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.536299 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d07c99d-fe00-4217-8d7a-2f848e825bf1" containerName="cinder-scheduler" Jan 29 11:47:31 crc kubenswrapper[4766]: E0129 11:47:31.536311 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c8b47a0-0ceb-45ec-bbc4-9747d92f0619" containerName="ceilometer-central-agent" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.536316 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8b47a0-0ceb-45ec-bbc4-9747d92f0619" containerName="ceilometer-central-agent" Jan 29 11:47:31 crc kubenswrapper[4766]: E0129 11:47:31.536329 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d07c99d-fe00-4217-8d7a-2f848e825bf1" containerName="probe" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.536337 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d07c99d-fe00-4217-8d7a-2f848e825bf1" containerName="probe" Jan 29 11:47:31 crc kubenswrapper[4766]: E0129 11:47:31.536348 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b4e558-a02f-4604-91ce-b99c34e061dd" containerName="glance-httpd" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.536353 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b4e558-a02f-4604-91ce-b99c34e061dd" containerName="glance-httpd" Jan 29 11:47:31 crc kubenswrapper[4766]: E0129 11:47:31.536367 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c8b47a0-0ceb-45ec-bbc4-9747d92f0619" containerName="ceilometer-notification-agent" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.536373 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8b47a0-0ceb-45ec-bbc4-9747d92f0619" containerName="ceilometer-notification-agent" Jan 29 11:47:31 crc kubenswrapper[4766]: E0129 11:47:31.536383 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c8b47a0-0ceb-45ec-bbc4-9747d92f0619" containerName="proxy-httpd" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.536388 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8b47a0-0ceb-45ec-bbc4-9747d92f0619" containerName="proxy-httpd" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.536594 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c8b47a0-0ceb-45ec-bbc4-9747d92f0619" 
containerName="ceilometer-notification-agent" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.536607 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b4e558-a02f-4604-91ce-b99c34e061dd" containerName="glance-log" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.536617 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c8b47a0-0ceb-45ec-bbc4-9747d92f0619" containerName="proxy-httpd" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.536630 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d07c99d-fe00-4217-8d7a-2f848e825bf1" containerName="probe" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.536650 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c8b47a0-0ceb-45ec-bbc4-9747d92f0619" containerName="ceilometer-central-agent" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.536659 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c8b47a0-0ceb-45ec-bbc4-9747d92f0619" containerName="sg-core" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.536667 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d07c99d-fe00-4217-8d7a-2f848e825bf1" containerName="cinder-scheduler" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.536679 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b4e558-a02f-4604-91ce-b99c34e061dd" containerName="glance-httpd" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.537800 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.538902 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"34b4e558-a02f-4604-91ce-b99c34e061dd","Type":"ContainerDied","Data":"3fde6fc2fa07629373f9c81475ac23674579ddb924a03cdcbd1726563898b176"} Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.538994 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.540205 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.541360 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9kz8m" event={"ID":"cae3e2e2-58b5-4a7a-ae77-7712d85990ea","Type":"ContainerStarted","Data":"93be5bf2a605132ef1b3bb51842c92fe9d507e13e31551c108ed61f43315a947"} Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.595253 4766 scope.go:117] "RemoveContainer" containerID="db668750241607632455ecc4015916149a6a26fb010ad04cf265aaeb0ebb4649" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.616876 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.688436 4766 scope.go:117] "RemoveContainer" containerID="58d98541d32a0de2879b147c35e937a70f935dff22781671baa8a4bfe2955a39" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.696490 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.709187 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5ffbf34-3350-41d4-ae62-94700d3e40bc-scripts\") pod \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.709247 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5ffbf34-3350-41d4-ae62-94700d3e40bc-public-tls-certs\") pod \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.709295 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ffbf34-3350-41d4-ae62-94700d3e40bc-combined-ca-bundle\") pod \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.709387 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2grnt\" (UniqueName: \"kubernetes.io/projected/d5ffbf34-3350-41d4-ae62-94700d3e40bc-kube-api-access-2grnt\") pod \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.709451 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.709492 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5ffbf34-3350-41d4-ae62-94700d3e40bc-logs\") pod \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.709617 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ffbf34-3350-41d4-ae62-94700d3e40bc-config-data\") pod \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.728645 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5ffbf34-3350-41d4-ae62-94700d3e40bc-httpd-run\") pod \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\" (UID: \"d5ffbf34-3350-41d4-ae62-94700d3e40bc\") " Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.729144 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\") " pod="openstack/cinder-scheduler-0" Jan 
29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.729182 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-scripts\") pod \"cinder-scheduler-0\" (UID: \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.729290 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv8l8\" (UniqueName: \"kubernetes.io/projected/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-kube-api-access-dv8l8\") pod \"cinder-scheduler-0\" (UID: \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.729441 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.729508 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-config-data\") pod \"cinder-scheduler-0\" (UID: \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.729542 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.731879 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5ffbf34-3350-41d4-ae62-94700d3e40bc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d5ffbf34-3350-41d4-ae62-94700d3e40bc" (UID: "d5ffbf34-3350-41d4-ae62-94700d3e40bc"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.737691 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5ffbf34-3350-41d4-ae62-94700d3e40bc-logs" (OuterVolumeSpecName: "logs") pod "d5ffbf34-3350-41d4-ae62-94700d3e40bc" (UID: "d5ffbf34-3350-41d4-ae62-94700d3e40bc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.742032 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ffbf34-3350-41d4-ae62-94700d3e40bc-scripts" (OuterVolumeSpecName: "scripts") pod "d5ffbf34-3350-41d4-ae62-94700d3e40bc" (UID: "d5ffbf34-3350-41d4-ae62-94700d3e40bc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.746055 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5c55-account-create-update-scf58"] Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.751045 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.512142392 podStartE2EDuration="15.751025105s" podCreationTimestamp="2026-01-29 11:47:16 +0000 UTC" firstStartedPulling="2026-01-29 11:47:17.002111764 +0000 UTC m=+1574.114504765" lastFinishedPulling="2026-01-29 11:47:30.240994467 +0000 UTC m=+1587.353387478" observedRunningTime="2026-01-29 11:47:31.484859925 +0000 UTC m=+1588.597252936" watchObservedRunningTime="2026-01-29 11:47:31.751025105 +0000 UTC m=+1588.863418116" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.752185 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5ffbf34-3350-41d4-ae62-94700d3e40bc-kube-api-access-2grnt" (OuterVolumeSpecName: "kube-api-access-2grnt") pod "d5ffbf34-3350-41d4-ae62-94700d3e40bc" (UID: "d5ffbf34-3350-41d4-ae62-94700d3e40bc"). InnerVolumeSpecName "kube-api-access-2grnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.783894 4766 scope.go:117] "RemoveContainer" containerID="719ed8a40266897095fd4aac44082047f26bfc965df4c832be6919779bd55106" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.791949 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.793454 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "d5ffbf34-3350-41d4-ae62-94700d3e40bc" (UID: "d5ffbf34-3350-41d4-ae62-94700d3e40bc"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.810853 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.822726 4766 scope.go:117] "RemoveContainer" containerID="644a1eed80c6bd834a2d1d821fe616dd82c719bf8607d75bf46fe7d75bdf3811" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.825547 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.834651 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-config-data\") pod \"cinder-scheduler-0\" (UID: \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.834717 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.834892 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.834920 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-scripts\") pod \"cinder-scheduler-0\" (UID: \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.834986 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv8l8\" (UniqueName: \"kubernetes.io/projected/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-kube-api-access-dv8l8\") pod \"cinder-scheduler-0\" (UID: \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.835137 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.835205 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5ffbf34-3350-41d4-ae62-94700d3e40bc-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.835216 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5ffbf34-3350-41d4-ae62-94700d3e40bc-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.835227 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2grnt\" (UniqueName: \"kubernetes.io/projected/d5ffbf34-3350-41d4-ae62-94700d3e40bc-kube-api-access-2grnt\") on node \"crc\" DevicePath \"\"" 
Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.835246 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.835255 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5ffbf34-3350-41d4-ae62-94700d3e40bc-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.835944 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.839281 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.853329 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:47:31 crc kubenswrapper[4766]: E0129 11:47:31.854009 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ffbf34-3350-41d4-ae62-94700d3e40bc" containerName="glance-log" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.854029 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ffbf34-3350-41d4-ae62-94700d3e40bc" containerName="glance-log" Jan 29 11:47:31 crc kubenswrapper[4766]: E0129 11:47:31.854039 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ffbf34-3350-41d4-ae62-94700d3e40bc" containerName="glance-httpd" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.854046 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ffbf34-3350-41d4-ae62-94700d3e40bc" containerName="glance-httpd" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.854295 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5ffbf34-3350-41d4-ae62-94700d3e40bc" containerName="glance-log" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.854329 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5ffbf34-3350-41d4-ae62-94700d3e40bc" containerName="glance-httpd" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.862222 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.868760 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.870012 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.870179 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.870202 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-config-data\") pod \"cinder-scheduler-0\" (UID: \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.871426 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-scripts\") pod \"cinder-scheduler-0\" (UID: \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.882811 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.885782 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.892645 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv8l8\" (UniqueName: \"kubernetes.io/projected/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-kube-api-access-dv8l8\") pod \"cinder-scheduler-0\" (UID: \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\") " pod="openstack/cinder-scheduler-0" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.897036 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.899756 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.899855 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.903238 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.903684 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.952698 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:47:31 crc kubenswrapper[4766]: I0129 11:47:31.986286 4766 scope.go:117] "RemoveContainer" containerID="ac1cc8319719cbef03c5a82f3a7ef72fb4e425fac3df50f060722739c6183ff7" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.042549 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.042600 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c9155e4-f0e0-4edb-a814-7db1466002e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8c9155e4-f0e0-4edb-a814-7db1466002e7\") " pod="openstack/ceilometer-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.042626 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2ff7\" (UniqueName: \"kubernetes.io/projected/1996793f-f3ca-4559-97d6-867f0d0a2b61-kube-api-access-n2ff7\") pod \"glance-default-internal-api-0\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.042651 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c9155e4-f0e0-4edb-a814-7db1466002e7-scripts\") pod \"ceilometer-0\" (UID: \"8c9155e4-f0e0-4edb-a814-7db1466002e7\") " pod="openstack/ceilometer-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.042671 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1996793f-f3ca-4559-97d6-867f0d0a2b61-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.042708 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c9155e4-f0e0-4edb-a814-7db1466002e7-log-httpd\") pod \"ceilometer-0\" (UID: \"8c9155e4-f0e0-4edb-a814-7db1466002e7\") " pod="openstack/ceilometer-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.042743 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1996793f-f3ca-4559-97d6-867f0d0a2b61-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.042797 
4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1996793f-f3ca-4559-97d6-867f0d0a2b61-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.042831 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1996793f-f3ca-4559-97d6-867f0d0a2b61-logs\") pod \"glance-default-internal-api-0\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.042846 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c9155e4-f0e0-4edb-a814-7db1466002e7-config-data\") pod \"ceilometer-0\" (UID: \"8c9155e4-f0e0-4edb-a814-7db1466002e7\") " pod="openstack/ceilometer-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.042865 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1996793f-f3ca-4559-97d6-867f0d0a2b61-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.042899 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c9155e4-f0e0-4edb-a814-7db1466002e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8c9155e4-f0e0-4edb-a814-7db1466002e7\") " pod="openstack/ceilometer-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.042934 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hsjv\" (UniqueName: \"kubernetes.io/projected/8c9155e4-f0e0-4edb-a814-7db1466002e7-kube-api-access-6hsjv\") pod \"ceilometer-0\" (UID: \"8c9155e4-f0e0-4edb-a814-7db1466002e7\") " pod="openstack/ceilometer-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.042948 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c9155e4-f0e0-4edb-a814-7db1466002e7-run-httpd\") pod \"ceilometer-0\" (UID: \"8c9155e4-f0e0-4edb-a814-7db1466002e7\") " pod="openstack/ceilometer-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.042975 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1996793f-f3ca-4559-97d6-867f0d0a2b61-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.145665 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.146680 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.146728 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c9155e4-f0e0-4edb-a814-7db1466002e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8c9155e4-f0e0-4edb-a814-7db1466002e7\") " pod="openstack/ceilometer-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.146749 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2ff7\" (UniqueName: \"kubernetes.io/projected/1996793f-f3ca-4559-97d6-867f0d0a2b61-kube-api-access-n2ff7\") pod \"glance-default-internal-api-0\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.146764 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c9155e4-f0e0-4edb-a814-7db1466002e7-scripts\") pod \"ceilometer-0\" (UID: \"8c9155e4-f0e0-4edb-a814-7db1466002e7\") " pod="openstack/ceilometer-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.146785 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1996793f-f3ca-4559-97d6-867f0d0a2b61-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.146828 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c9155e4-f0e0-4edb-a814-7db1466002e7-log-httpd\") pod \"ceilometer-0\" (UID: \"8c9155e4-f0e0-4edb-a814-7db1466002e7\") " pod="openstack/ceilometer-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.146870 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1996793f-f3ca-4559-97d6-867f0d0a2b61-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.146941 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1996793f-f3ca-4559-97d6-867f0d0a2b61-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.146987 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1996793f-f3ca-4559-97d6-867f0d0a2b61-logs\") pod \"glance-default-internal-api-0\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.147003 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c9155e4-f0e0-4edb-a814-7db1466002e7-config-data\") pod \"ceilometer-0\" (UID: \"8c9155e4-f0e0-4edb-a814-7db1466002e7\") " pod="openstack/ceilometer-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.147024 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1996793f-f3ca-4559-97d6-867f0d0a2b61-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.147061 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c9155e4-f0e0-4edb-a814-7db1466002e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8c9155e4-f0e0-4edb-a814-7db1466002e7\") " pod="openstack/ceilometer-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.147077 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hsjv\" (UniqueName: \"kubernetes.io/projected/8c9155e4-f0e0-4edb-a814-7db1466002e7-kube-api-access-6hsjv\") pod \"ceilometer-0\" (UID: \"8c9155e4-f0e0-4edb-a814-7db1466002e7\") " pod="openstack/ceilometer-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.147094 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c9155e4-f0e0-4edb-a814-7db1466002e7-run-httpd\") pod \"ceilometer-0\" (UID: \"8c9155e4-f0e0-4edb-a814-7db1466002e7\") " pod="openstack/ceilometer-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.147130 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1996793f-f3ca-4559-97d6-867f0d0a2b61-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.147197 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.147206 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.147930 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1996793f-f3ca-4559-97d6-867f0d0a2b61-logs\") pod \"glance-default-internal-api-0\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.153170 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c9155e4-f0e0-4edb-a814-7db1466002e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8c9155e4-f0e0-4edb-a814-7db1466002e7\") " pod="openstack/ceilometer-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.153582 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1996793f-f3ca-4559-97d6-867f0d0a2b61-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.153936 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c9155e4-f0e0-4edb-a814-7db1466002e7-log-httpd\") pod \"ceilometer-0\" (UID: \"8c9155e4-f0e0-4edb-a814-7db1466002e7\") " pod="openstack/ceilometer-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.154140 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c9155e4-f0e0-4edb-a814-7db1466002e7-run-httpd\") pod \"ceilometer-0\" (UID: \"8c9155e4-f0e0-4edb-a814-7db1466002e7\") " pod="openstack/ceilometer-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.156672 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c9155e4-f0e0-4edb-a814-7db1466002e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8c9155e4-f0e0-4edb-a814-7db1466002e7\") " pod="openstack/ceilometer-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.158723 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c9155e4-f0e0-4edb-a814-7db1466002e7-config-data\") pod \"ceilometer-0\" (UID: \"8c9155e4-f0e0-4edb-a814-7db1466002e7\") " pod="openstack/ceilometer-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.185590 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ffbf34-3350-41d4-ae62-94700d3e40bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5ffbf34-3350-41d4-ae62-94700d3e40bc" (UID: "d5ffbf34-3350-41d4-ae62-94700d3e40bc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.186081 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1996793f-f3ca-4559-97d6-867f0d0a2b61-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.186547 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c9155e4-f0e0-4edb-a814-7db1466002e7-scripts\") pod \"ceilometer-0\" (UID: \"8c9155e4-f0e0-4edb-a814-7db1466002e7\") " pod="openstack/ceilometer-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.188176 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1996793f-f3ca-4559-97d6-867f0d0a2b61-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.188667 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1996793f-f3ca-4559-97d6-867f0d0a2b61-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.203379 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hsjv\" (UniqueName: \"kubernetes.io/projected/8c9155e4-f0e0-4edb-a814-7db1466002e7-kube-api-access-6hsjv\") pod \"ceilometer-0\" (UID: 
\"8c9155e4-f0e0-4edb-a814-7db1466002e7\") " pod="openstack/ceilometer-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.218673 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1996793f-f3ca-4559-97d6-867f0d0a2b61-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.219398 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ffbf34-3350-41d4-ae62-94700d3e40bc-config-data" (OuterVolumeSpecName: "config-data") pod "d5ffbf34-3350-41d4-ae62-94700d3e40bc" (UID: "d5ffbf34-3350-41d4-ae62-94700d3e40bc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.228315 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2ff7\" (UniqueName: \"kubernetes.io/projected/1996793f-f3ca-4559-97d6-867f0d0a2b61-kube-api-access-n2ff7\") pod \"glance-default-internal-api-0\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.249202 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ffbf34-3350-41d4-ae62-94700d3e40bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.249238 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ffbf34-3350-41d4-ae62-94700d3e40bc-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.259551 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ffbf34-3350-41d4-ae62-94700d3e40bc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d5ffbf34-3350-41d4-ae62-94700d3e40bc" (UID: "d5ffbf34-3350-41d4-ae62-94700d3e40bc"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.303541 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.358870 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5ffbf34-3350-41d4-ae62-94700d3e40bc-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.394686 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.471195 4766 scope.go:117] "RemoveContainer" containerID="234cd293c0fd1f05219c6c823a7a1f6d478a64dd1cfb8d4f9c760d4edb64cb35" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.505838 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.553754 4766 scope.go:117] "RemoveContainer" containerID="8710b4eb2f7c865ac805f5eb141b278b7f982a89471c605132cd4d9e1b77baf3" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.625973 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d5ffbf34-3350-41d4-ae62-94700d3e40bc","Type":"ContainerDied","Data":"e38b8b0d5a682e48caea6b042f7191cc7fec9bcef78cf338864a60b8a430492e"} Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.626078 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.655164 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-862hs" event={"ID":"1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e","Type":"ContainerStarted","Data":"27e8acc57c943d71e1f7a1c897549e9c2b221f2a051e4252b5048fb9ed9084e2"} Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.679287 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-hdfk6" event={"ID":"075438aa-afe6-4a7c-aa4a-a9b89406b170","Type":"ContainerStarted","Data":"0bd5e88b557bb92be5bef66e522342bbd2342e45ca18a5a144f3ef26f6fc5ff0"} Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.680800 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-862hs" podStartSLOduration=7.680782461 podStartE2EDuration="7.680782461s" podCreationTimestamp="2026-01-29 11:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:47:32.676867783 +0000 UTC m=+1589.789260794" watchObservedRunningTime="2026-01-29 11:47:32.680782461 +0000 UTC m=+1589.793175472" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.690865 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9kz8m" event={"ID":"cae3e2e2-58b5-4a7a-ae77-7712d85990ea","Type":"ContainerStarted","Data":"d7757455cbb85c897a50eea066ec215bb182c05d10d10b012cbf172f9eac52e9"} Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.711530 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5c55-account-create-update-scf58" event={"ID":"b251c8b1-bef8-4e31-86dd-fdfca1dc0594","Type":"ContainerStarted","Data":"2eb4db84d1fc8c451eb5eb7056a0641093709a02d8ccff8a80151f403e561571"} Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.725616 4766 generic.go:334] "Generic (PLEG): container finished" podID="c0ab1cb8-dc08-4f72-a765-083f2a511a7e" containerID="39e0f3e0ffe14a10427ef4dfc0519bb7bc13b268ccc6da302855ea96686846e9" exitCode=0 Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.726222 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6854899c48-wx94v" event={"ID":"c0ab1cb8-dc08-4f72-a765-083f2a511a7e","Type":"ContainerDied","Data":"39e0f3e0ffe14a10427ef4dfc0519bb7bc13b268ccc6da302855ea96686846e9"} Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.741535 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ba9c-account-create-update-fvdrk" event={"ID":"fd606a73-05c8-4c8f-b4f2-281a9f308e43","Type":"ContainerStarted","Data":"a2d18f5ec3dadbff7d926f5e1d5f9e45f90f6843b5fe1c8d7b1e4834d350542d"} Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.753400 4766 generic.go:334] "Generic 
(PLEG): container finished" podID="e5484d77-284b-4422-aa8a-c44761f4c8e9" containerID="5c4d7a8ef15ea08f1047185923173dff7aaa7691455c34c2f8cea7f984b1d2d4" exitCode=0 Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.753513 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a5ac-account-create-update-k9rgg" event={"ID":"e5484d77-284b-4422-aa8a-c44761f4c8e9","Type":"ContainerDied","Data":"5c4d7a8ef15ea08f1047185923173dff7aaa7691455c34c2f8cea7f984b1d2d4"} Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.757163 4766 scope.go:117] "RemoveContainer" containerID="5821b006dbee2cf56a29b92d60e12310db1ab8ccb5a58f1a49999ad2562e76df" Jan 29 11:47:32 crc kubenswrapper[4766]: E0129 11:47:32.757230 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8710b4eb2f7c865ac805f5eb141b278b7f982a89471c605132cd4d9e1b77baf3\": container with ID starting with 8710b4eb2f7c865ac805f5eb141b278b7f982a89471c605132cd4d9e1b77baf3 not found: ID does not exist" containerID="8710b4eb2f7c865ac805f5eb141b278b7f982a89471c605132cd4d9e1b77baf3" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.768334 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-ba9c-account-create-update-fvdrk" podStartSLOduration=7.7683114490000005 podStartE2EDuration="7.768311449s" podCreationTimestamp="2026-01-29 11:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:47:32.75682623 +0000 UTC m=+1589.869219251" watchObservedRunningTime="2026-01-29 11:47:32.768311449 +0000 UTC m=+1589.880704460" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.783512 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.802009 4766 scope.go:117] "RemoveContainer" containerID="36a7b334ee749d9bdadb767f5fdf15c6eab854818ce783508b3830c80759ff69" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.849058 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.861185 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.874761 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.882790 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.889472 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.906010 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:47:32 crc kubenswrapper[4766]: I0129 11:47:32.931743 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.020154 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " pod="openstack/glance-default-external-api-0" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.020236 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd1ffb49-b314-4d31-94d6-de70e35d917e-config-data\") pod \"glance-default-external-api-0\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " pod="openstack/glance-default-external-api-0" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.020285 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvzbb\" (UniqueName: \"kubernetes.io/projected/dd1ffb49-b314-4d31-94d6-de70e35d917e-kube-api-access-xvzbb\") pod \"glance-default-external-api-0\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " pod="openstack/glance-default-external-api-0" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.020320 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dd1ffb49-b314-4d31-94d6-de70e35d917e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " pod="openstack/glance-default-external-api-0" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.020375 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd1ffb49-b314-4d31-94d6-de70e35d917e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " pod="openstack/glance-default-external-api-0" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.020449 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd1ffb49-b314-4d31-94d6-de70e35d917e-scripts\") pod \"glance-default-external-api-0\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " pod="openstack/glance-default-external-api-0" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.020475 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd1ffb49-b314-4d31-94d6-de70e35d917e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " pod="openstack/glance-default-external-api-0" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.020514 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd1ffb49-b314-4d31-94d6-de70e35d917e-logs\") pod \"glance-default-external-api-0\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " pod="openstack/glance-default-external-api-0" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.122094 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd1ffb49-b314-4d31-94d6-de70e35d917e-scripts\") pod \"glance-default-external-api-0\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " pod="openstack/glance-default-external-api-0" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.122152 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd1ffb49-b314-4d31-94d6-de70e35d917e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " pod="openstack/glance-default-external-api-0" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.122221 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd1ffb49-b314-4d31-94d6-de70e35d917e-logs\") pod \"glance-default-external-api-0\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " pod="openstack/glance-default-external-api-0" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.122294 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " pod="openstack/glance-default-external-api-0" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.122345 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd1ffb49-b314-4d31-94d6-de70e35d917e-config-data\") pod \"glance-default-external-api-0\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " pod="openstack/glance-default-external-api-0" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.122389 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvzbb\" (UniqueName: \"kubernetes.io/projected/dd1ffb49-b314-4d31-94d6-de70e35d917e-kube-api-access-xvzbb\") pod \"glance-default-external-api-0\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " pod="openstack/glance-default-external-api-0" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.122441 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dd1ffb49-b314-4d31-94d6-de70e35d917e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " pod="openstack/glance-default-external-api-0" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.122507 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd1ffb49-b314-4d31-94d6-de70e35d917e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " pod="openstack/glance-default-external-api-0" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.124892 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.125798 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.133810 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dd1ffb49-b314-4d31-94d6-de70e35d917e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " pod="openstack/glance-default-external-api-0" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.135957 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd1ffb49-b314-4d31-94d6-de70e35d917e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " pod="openstack/glance-default-external-api-0" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.136886 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd1ffb49-b314-4d31-94d6-de70e35d917e-logs\") pod \"glance-default-external-api-0\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " pod="openstack/glance-default-external-api-0" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.145013 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd1ffb49-b314-4d31-94d6-de70e35d917e-scripts\") pod \"glance-default-external-api-0\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " pod="openstack/glance-default-external-api-0" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.151064 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd1ffb49-b314-4d31-94d6-de70e35d917e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " pod="openstack/glance-default-external-api-0" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.156436 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd1ffb49-b314-4d31-94d6-de70e35d917e-config-data\") pod \"glance-default-external-api-0\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " pod="openstack/glance-default-external-api-0" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.159329 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvzbb\" (UniqueName: \"kubernetes.io/projected/dd1ffb49-b314-4d31-94d6-de70e35d917e-kube-api-access-xvzbb\") pod \"glance-default-external-api-0\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " pod="openstack/glance-default-external-api-0" Jan 29 11:47:33 crc kubenswrapper[4766]: W0129 11:47:33.173049 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1996793f_f3ca_4559_97d6_867f0d0a2b61.slice/crio-d714fa11d46ffb614c3c30a66523c9e5b6c471d6f8471c3847aee783b2cb5d33 WatchSource:0}: Error finding container d714fa11d46ffb614c3c30a66523c9e5b6c471d6f8471c3847aee783b2cb5d33: Status 404 returned error can't find the container with id 
d714fa11d46ffb614c3c30a66523c9e5b6c471d6f8471c3847aee783b2cb5d33 Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.214110 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " pod="openstack/glance-default-external-api-0" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.216209 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.250309 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34b4e558-a02f-4604-91ce-b99c34e061dd" path="/var/lib/kubelet/pods/34b4e558-a02f-4604-91ce-b99c34e061dd/volumes" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.251373 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d07c99d-fe00-4217-8d7a-2f848e825bf1" path="/var/lib/kubelet/pods/4d07c99d-fe00-4217-8d7a-2f848e825bf1/volumes" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.252616 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c8b47a0-0ceb-45ec-bbc4-9747d92f0619" path="/var/lib/kubelet/pods/7c8b47a0-0ceb-45ec-bbc4-9747d92f0619/volumes" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.254888 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5ffbf34-3350-41d4-ae62-94700d3e40bc" path="/var/lib/kubelet/pods/d5ffbf34-3350-41d4-ae62-94700d3e40bc/volumes" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.506899 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.777980 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c9155e4-f0e0-4edb-a814-7db1466002e7","Type":"ContainerStarted","Data":"6bd8014422c1896086174900fdf38208977bacb62378c914bc6a3506c6fb3e4c"} Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.779071 4766 generic.go:334] "Generic (PLEG): container finished" podID="b251c8b1-bef8-4e31-86dd-fdfca1dc0594" containerID="b47293f7cec9d0af51cd2d23e9b89b2afc946db075024da64ce50cf1a5082bcb" exitCode=0 Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.779126 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5c55-account-create-update-scf58" event={"ID":"b251c8b1-bef8-4e31-86dd-fdfca1dc0594","Type":"ContainerDied","Data":"b47293f7cec9d0af51cd2d23e9b89b2afc946db075024da64ce50cf1a5082bcb"} Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.779994 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1996793f-f3ca-4559-97d6-867f0d0a2b61","Type":"ContainerStarted","Data":"d714fa11d46ffb614c3c30a66523c9e5b6c471d6f8471c3847aee783b2cb5d33"} Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.781128 4766 generic.go:334] "Generic (PLEG): container finished" podID="fd606a73-05c8-4c8f-b4f2-281a9f308e43" containerID="a2d18f5ec3dadbff7d926f5e1d5f9e45f90f6843b5fe1c8d7b1e4834d350542d" exitCode=0 Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.781161 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ba9c-account-create-update-fvdrk" event={"ID":"fd606a73-05c8-4c8f-b4f2-281a9f308e43","Type":"ContainerDied","Data":"a2d18f5ec3dadbff7d926f5e1d5f9e45f90f6843b5fe1c8d7b1e4834d350542d"} Jan 29 11:47:33 
crc kubenswrapper[4766]: I0129 11:47:33.783921 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0e9e7d37-60ae-4489-a69a-e4168eb87cf2","Type":"ContainerStarted","Data":"6ee9c30a2b64eec16ccf6d3b12b79e122359a7365e95e868d280a6f09522ec08"} Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.794181 4766 generic.go:334] "Generic (PLEG): container finished" podID="1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e" containerID="9b375c5530a1a3775341ca3d65d9013b6d89fdf7753d546f691d72af15d5a3a6" exitCode=0 Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.794290 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-862hs" event={"ID":"1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e","Type":"ContainerDied","Data":"9b375c5530a1a3775341ca3d65d9013b6d89fdf7753d546f691d72af15d5a3a6"} Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.823360 4766 generic.go:334] "Generic (PLEG): container finished" podID="075438aa-afe6-4a7c-aa4a-a9b89406b170" containerID="0f81dc60f6935f2c85c58f5ca30e75b0ad34b984d68a583addc429fb98cbd09d" exitCode=0 Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.823565 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-hdfk6" event={"ID":"075438aa-afe6-4a7c-aa4a-a9b89406b170","Type":"ContainerDied","Data":"0f81dc60f6935f2c85c58f5ca30e75b0ad34b984d68a583addc429fb98cbd09d"} Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.835047 4766 generic.go:334] "Generic (PLEG): container finished" podID="cae3e2e2-58b5-4a7a-ae77-7712d85990ea" containerID="d7757455cbb85c897a50eea066ec215bb182c05d10d10b012cbf172f9eac52e9" exitCode=0 Jan 29 11:47:33 crc kubenswrapper[4766]: I0129 11:47:33.835275 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9kz8m" event={"ID":"cae3e2e2-58b5-4a7a-ae77-7712d85990ea","Type":"ContainerDied","Data":"d7757455cbb85c897a50eea066ec215bb182c05d10d10b012cbf172f9eac52e9"} Jan 29 11:47:34 crc kubenswrapper[4766]: I0129 11:47:34.151155 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:47:34 crc kubenswrapper[4766]: W0129 11:47:34.186089 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd1ffb49_b314_4d31_94d6_de70e35d917e.slice/crio-67055ba92e4a2227007f2b095adf233704d42828da7e147a26d5f7c0ce732daf WatchSource:0}: Error finding container 67055ba92e4a2227007f2b095adf233704d42828da7e147a26d5f7c0ce732daf: Status 404 returned error can't find the container with id 67055ba92e4a2227007f2b095adf233704d42828da7e147a26d5f7c0ce732daf Jan 29 11:47:34 crc kubenswrapper[4766]: I0129 11:47:34.253939 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-9kz8m" Jan 29 11:47:34 crc kubenswrapper[4766]: I0129 11:47:34.360393 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cae3e2e2-58b5-4a7a-ae77-7712d85990ea-operator-scripts\") pod \"cae3e2e2-58b5-4a7a-ae77-7712d85990ea\" (UID: \"cae3e2e2-58b5-4a7a-ae77-7712d85990ea\") " Jan 29 11:47:34 crc kubenswrapper[4766]: I0129 11:47:34.360585 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpjjw\" (UniqueName: \"kubernetes.io/projected/cae3e2e2-58b5-4a7a-ae77-7712d85990ea-kube-api-access-wpjjw\") pod \"cae3e2e2-58b5-4a7a-ae77-7712d85990ea\" (UID: \"cae3e2e2-58b5-4a7a-ae77-7712d85990ea\") " Jan 29 11:47:34 crc kubenswrapper[4766]: I0129 11:47:34.362077 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cae3e2e2-58b5-4a7a-ae77-7712d85990ea-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cae3e2e2-58b5-4a7a-ae77-7712d85990ea" (UID: "cae3e2e2-58b5-4a7a-ae77-7712d85990ea"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:47:34 crc kubenswrapper[4766]: I0129 11:47:34.367959 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cae3e2e2-58b5-4a7a-ae77-7712d85990ea-kube-api-access-wpjjw" (OuterVolumeSpecName: "kube-api-access-wpjjw") pod "cae3e2e2-58b5-4a7a-ae77-7712d85990ea" (UID: "cae3e2e2-58b5-4a7a-ae77-7712d85990ea"). InnerVolumeSpecName "kube-api-access-wpjjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:47:34 crc kubenswrapper[4766]: I0129 11:47:34.402854 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-a5ac-account-create-update-k9rgg" Jan 29 11:47:34 crc kubenswrapper[4766]: I0129 11:47:34.462814 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpjjw\" (UniqueName: \"kubernetes.io/projected/cae3e2e2-58b5-4a7a-ae77-7712d85990ea-kube-api-access-wpjjw\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:34 crc kubenswrapper[4766]: I0129 11:47:34.462853 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cae3e2e2-58b5-4a7a-ae77-7712d85990ea-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:34 crc kubenswrapper[4766]: I0129 11:47:34.564393 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5484d77-284b-4422-aa8a-c44761f4c8e9-operator-scripts\") pod \"e5484d77-284b-4422-aa8a-c44761f4c8e9\" (UID: \"e5484d77-284b-4422-aa8a-c44761f4c8e9\") " Jan 29 11:47:34 crc kubenswrapper[4766]: I0129 11:47:34.564781 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kqpr\" (UniqueName: \"kubernetes.io/projected/e5484d77-284b-4422-aa8a-c44761f4c8e9-kube-api-access-2kqpr\") pod \"e5484d77-284b-4422-aa8a-c44761f4c8e9\" (UID: \"e5484d77-284b-4422-aa8a-c44761f4c8e9\") " Jan 29 11:47:34 crc kubenswrapper[4766]: I0129 11:47:34.567846 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5484d77-284b-4422-aa8a-c44761f4c8e9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e5484d77-284b-4422-aa8a-c44761f4c8e9" (UID: "e5484d77-284b-4422-aa8a-c44761f4c8e9"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:47:34 crc kubenswrapper[4766]: I0129 11:47:34.569269 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5484d77-284b-4422-aa8a-c44761f4c8e9-kube-api-access-2kqpr" (OuterVolumeSpecName: "kube-api-access-2kqpr") pod "e5484d77-284b-4422-aa8a-c44761f4c8e9" (UID: "e5484d77-284b-4422-aa8a-c44761f4c8e9"). InnerVolumeSpecName "kube-api-access-2kqpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:47:34 crc kubenswrapper[4766]: I0129 11:47:34.667531 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kqpr\" (UniqueName: \"kubernetes.io/projected/e5484d77-284b-4422-aa8a-c44761f4c8e9-kube-api-access-2kqpr\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:34 crc kubenswrapper[4766]: I0129 11:47:34.667575 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5484d77-284b-4422-aa8a-c44761f4c8e9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:34 crc kubenswrapper[4766]: I0129 11:47:34.883325 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c9155e4-f0e0-4edb-a814-7db1466002e7","Type":"ContainerStarted","Data":"518ba0c3fb255242564bc1f7624df98e5fddad11773d48aec17ba9dafcf48845"} Jan 29 11:47:34 crc kubenswrapper[4766]: I0129 11:47:34.897978 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9kz8m" event={"ID":"cae3e2e2-58b5-4a7a-ae77-7712d85990ea","Type":"ContainerDied","Data":"93be5bf2a605132ef1b3bb51842c92fe9d507e13e31551c108ed61f43315a947"} Jan 29 11:47:34 crc kubenswrapper[4766]: I0129 11:47:34.898019 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93be5bf2a605132ef1b3bb51842c92fe9d507e13e31551c108ed61f43315a947" Jan 29 11:47:34 crc kubenswrapper[4766]: I0129 11:47:34.898084 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9kz8m" Jan 29 11:47:34 crc kubenswrapper[4766]: I0129 11:47:34.900502 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dd1ffb49-b314-4d31-94d6-de70e35d917e","Type":"ContainerStarted","Data":"67055ba92e4a2227007f2b095adf233704d42828da7e147a26d5f7c0ce732daf"} Jan 29 11:47:34 crc kubenswrapper[4766]: I0129 11:47:34.926707 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1996793f-f3ca-4559-97d6-867f0d0a2b61","Type":"ContainerStarted","Data":"5fa3e2236ec63b27db194527bb716839b21f9cea6f579d3762f4f41dced8ddd1"} Jan 29 11:47:34 crc kubenswrapper[4766]: I0129 11:47:34.960763 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a5ac-account-create-update-k9rgg" event={"ID":"e5484d77-284b-4422-aa8a-c44761f4c8e9","Type":"ContainerDied","Data":"c1afcf6becbd79acb5d010cc4b1f76e5f2da252d7e5ba96b9516c58dfa6b3275"} Jan 29 11:47:34 crc kubenswrapper[4766]: I0129 11:47:34.960807 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1afcf6becbd79acb5d010cc4b1f76e5f2da252d7e5ba96b9516c58dfa6b3275" Jan 29 11:47:34 crc kubenswrapper[4766]: I0129 11:47:34.960989 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-a5ac-account-create-update-k9rgg" Jan 29 11:47:35 crc kubenswrapper[4766]: I0129 11:47:35.009149 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0e9e7d37-60ae-4489-a69a-e4168eb87cf2","Type":"ContainerStarted","Data":"14e4b623cc33e1869a58abf1c35db16e3909d3d2a092250a9f93c7d83fa741ec"} Jan 29 11:47:35 crc kubenswrapper[4766]: I0129 11:47:35.034311 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.034288823 podStartE2EDuration="4.034288823s" podCreationTimestamp="2026-01-29 11:47:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:47:35.029239563 +0000 UTC m=+1592.141632584" watchObservedRunningTime="2026-01-29 11:47:35.034288823 +0000 UTC m=+1592.146681844" Jan 29 11:47:35 crc kubenswrapper[4766]: I0129 11:47:35.712923 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ba9c-account-create-update-fvdrk" Jan 29 11:47:35 crc kubenswrapper[4766]: I0129 11:47:35.838522 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-hdfk6" Jan 29 11:47:35 crc kubenswrapper[4766]: I0129 11:47:35.871225 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-862hs" Jan 29 11:47:35 crc kubenswrapper[4766]: I0129 11:47:35.883200 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd606a73-05c8-4c8f-b4f2-281a9f308e43-operator-scripts\") pod \"fd606a73-05c8-4c8f-b4f2-281a9f308e43\" (UID: \"fd606a73-05c8-4c8f-b4f2-281a9f308e43\") " Jan 29 11:47:35 crc kubenswrapper[4766]: I0129 11:47:35.883348 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgncs\" (UniqueName: \"kubernetes.io/projected/fd606a73-05c8-4c8f-b4f2-281a9f308e43-kube-api-access-xgncs\") pod \"fd606a73-05c8-4c8f-b4f2-281a9f308e43\" (UID: \"fd606a73-05c8-4c8f-b4f2-281a9f308e43\") " Jan 29 11:47:35 crc kubenswrapper[4766]: I0129 11:47:35.883924 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd606a73-05c8-4c8f-b4f2-281a9f308e43-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fd606a73-05c8-4c8f-b4f2-281a9f308e43" (UID: "fd606a73-05c8-4c8f-b4f2-281a9f308e43"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:47:35 crc kubenswrapper[4766]: I0129 11:47:35.889493 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd606a73-05c8-4c8f-b4f2-281a9f308e43-kube-api-access-xgncs" (OuterVolumeSpecName: "kube-api-access-xgncs") pod "fd606a73-05c8-4c8f-b4f2-281a9f308e43" (UID: "fd606a73-05c8-4c8f-b4f2-281a9f308e43"). InnerVolumeSpecName "kube-api-access-xgncs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:47:35 crc kubenswrapper[4766]: I0129 11:47:35.914728 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5c55-account-create-update-scf58" Jan 29 11:47:35 crc kubenswrapper[4766]: I0129 11:47:35.987747 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e-operator-scripts\") pod \"1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e\" (UID: \"1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e\") " Jan 29 11:47:35 crc kubenswrapper[4766]: I0129 11:47:35.987894 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnfxp\" (UniqueName: \"kubernetes.io/projected/075438aa-afe6-4a7c-aa4a-a9b89406b170-kube-api-access-gnfxp\") pod \"075438aa-afe6-4a7c-aa4a-a9b89406b170\" (UID: \"075438aa-afe6-4a7c-aa4a-a9b89406b170\") " Jan 29 11:47:35 crc kubenswrapper[4766]: I0129 11:47:35.988000 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/075438aa-afe6-4a7c-aa4a-a9b89406b170-operator-scripts\") pod \"075438aa-afe6-4a7c-aa4a-a9b89406b170\" (UID: \"075438aa-afe6-4a7c-aa4a-a9b89406b170\") " Jan 29 11:47:35 crc kubenswrapper[4766]: I0129 11:47:35.988063 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6qzh\" (UniqueName: \"kubernetes.io/projected/1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e-kube-api-access-l6qzh\") pod \"1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e\" (UID: \"1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e\") " Jan 29 11:47:35 crc kubenswrapper[4766]: I0129 11:47:35.988595 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd606a73-05c8-4c8f-b4f2-281a9f308e43-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:35 crc kubenswrapper[4766]: I0129 11:47:35.988616 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgncs\" (UniqueName: \"kubernetes.io/projected/fd606a73-05c8-4c8f-b4f2-281a9f308e43-kube-api-access-xgncs\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:35 crc kubenswrapper[4766]: I0129 11:47:35.988937 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/075438aa-afe6-4a7c-aa4a-a9b89406b170-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "075438aa-afe6-4a7c-aa4a-a9b89406b170" (UID: "075438aa-afe6-4a7c-aa4a-a9b89406b170"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:47:35 crc kubenswrapper[4766]: I0129 11:47:35.989472 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e" (UID: "1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.007906 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/075438aa-afe6-4a7c-aa4a-a9b89406b170-kube-api-access-gnfxp" (OuterVolumeSpecName: "kube-api-access-gnfxp") pod "075438aa-afe6-4a7c-aa4a-a9b89406b170" (UID: "075438aa-afe6-4a7c-aa4a-a9b89406b170"). InnerVolumeSpecName "kube-api-access-gnfxp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.035356 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e-kube-api-access-l6qzh" (OuterVolumeSpecName: "kube-api-access-l6qzh") pod "1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e" (UID: "1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e"). InnerVolumeSpecName "kube-api-access-l6qzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.084630 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c9155e4-f0e0-4edb-a814-7db1466002e7","Type":"ContainerStarted","Data":"d0acd5f1fbd5f9a5efec13f7d0171b4a80177f7d4ce942590c1d0cd644ea98c7"} Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.097550 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pgh9\" (UniqueName: \"kubernetes.io/projected/b251c8b1-bef8-4e31-86dd-fdfca1dc0594-kube-api-access-4pgh9\") pod \"b251c8b1-bef8-4e31-86dd-fdfca1dc0594\" (UID: \"b251c8b1-bef8-4e31-86dd-fdfca1dc0594\") " Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.098323 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b251c8b1-bef8-4e31-86dd-fdfca1dc0594-operator-scripts\") pod \"b251c8b1-bef8-4e31-86dd-fdfca1dc0594\" (UID: \"b251c8b1-bef8-4e31-86dd-fdfca1dc0594\") " Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.099119 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnfxp\" (UniqueName: \"kubernetes.io/projected/075438aa-afe6-4a7c-aa4a-a9b89406b170-kube-api-access-gnfxp\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.099153 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/075438aa-afe6-4a7c-aa4a-a9b89406b170-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.099163 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6qzh\" (UniqueName: \"kubernetes.io/projected/1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e-kube-api-access-l6qzh\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.099171 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.099806 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b251c8b1-bef8-4e31-86dd-fdfca1dc0594-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b251c8b1-bef8-4e31-86dd-fdfca1dc0594" (UID: "b251c8b1-bef8-4e31-86dd-fdfca1dc0594"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.104318 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b251c8b1-bef8-4e31-86dd-fdfca1dc0594-kube-api-access-4pgh9" (OuterVolumeSpecName: "kube-api-access-4pgh9") pod "b251c8b1-bef8-4e31-86dd-fdfca1dc0594" (UID: "b251c8b1-bef8-4e31-86dd-fdfca1dc0594"). InnerVolumeSpecName "kube-api-access-4pgh9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.112848 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5c55-account-create-update-scf58" Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.112846 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5c55-account-create-update-scf58" event={"ID":"b251c8b1-bef8-4e31-86dd-fdfca1dc0594","Type":"ContainerDied","Data":"2eb4db84d1fc8c451eb5eb7056a0641093709a02d8ccff8a80151f403e561571"} Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.114140 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2eb4db84d1fc8c451eb5eb7056a0641093709a02d8ccff8a80151f403e561571" Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.119286 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dd1ffb49-b314-4d31-94d6-de70e35d917e","Type":"ContainerStarted","Data":"5f484b8e00e79b044b603b23bc146e1024f8a58609cafd703ef2e0617e674445"} Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.132105 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ba9c-account-create-update-fvdrk" event={"ID":"fd606a73-05c8-4c8f-b4f2-281a9f308e43","Type":"ContainerDied","Data":"a308e2fc0724b7c94bce16c8f7ab730189faadb4d94be4f8cdf9e3557815572f"} Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.132155 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a308e2fc0724b7c94bce16c8f7ab730189faadb4d94be4f8cdf9e3557815572f" Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.132485 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ba9c-account-create-update-fvdrk" Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.155876 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1996793f-f3ca-4559-97d6-867f0d0a2b61","Type":"ContainerStarted","Data":"7757bdf84a1a20ce16552c3e15762e105f6f1602c859ce9e79be4ff4bbd3a36d"} Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.162295 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0e9e7d37-60ae-4489-a69a-e4168eb87cf2","Type":"ContainerStarted","Data":"0a988c9f46e3a70b4049e9abe888a41821aad0a9143a7ab9d80be40f836fe69e"} Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.175070 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-862hs" Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.175065 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-862hs" event={"ID":"1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e","Type":"ContainerDied","Data":"27e8acc57c943d71e1f7a1c897549e9c2b221f2a051e4252b5048fb9ed9084e2"} Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.176129 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27e8acc57c943d71e1f7a1c897549e9c2b221f2a051e4252b5048fb9ed9084e2" Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.188777 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.18875806 podStartE2EDuration="5.18875806s" podCreationTimestamp="2026-01-29 11:47:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:47:36.186349274 +0000 UTC m=+1593.298742285" watchObservedRunningTime="2026-01-29 11:47:36.18875806 +0000 UTC m=+1593.301151071" Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.201505 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b251c8b1-bef8-4e31-86dd-fdfca1dc0594-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.201537 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pgh9\" (UniqueName: \"kubernetes.io/projected/b251c8b1-bef8-4e31-86dd-fdfca1dc0594-kube-api-access-4pgh9\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.206647 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-hdfk6" event={"ID":"075438aa-afe6-4a7c-aa4a-a9b89406b170","Type":"ContainerDied","Data":"0bd5e88b557bb92be5bef66e522342bbd2342e45ca18a5a144f3ef26f6fc5ff0"} Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.206679 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bd5e88b557bb92be5bef66e522342bbd2342e45ca18a5a144f3ef26f6fc5ff0" Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.206731 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-hdfk6" Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.737736 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6854899c48-wx94v" Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.900320 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.930107 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-combined-ca-bundle\") pod \"c0ab1cb8-dc08-4f72-a765-083f2a511a7e\" (UID: \"c0ab1cb8-dc08-4f72-a765-083f2a511a7e\") " Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.930172 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zg7j\" (UniqueName: \"kubernetes.io/projected/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-kube-api-access-5zg7j\") pod \"c0ab1cb8-dc08-4f72-a765-083f2a511a7e\" (UID: \"c0ab1cb8-dc08-4f72-a765-083f2a511a7e\") " Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.930290 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-config\") pod \"c0ab1cb8-dc08-4f72-a765-083f2a511a7e\" (UID: \"c0ab1cb8-dc08-4f72-a765-083f2a511a7e\") " Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.930406 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-httpd-config\") pod \"c0ab1cb8-dc08-4f72-a765-083f2a511a7e\" (UID: \"c0ab1cb8-dc08-4f72-a765-083f2a511a7e\") " Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.930438 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-ovndb-tls-certs\") pod \"c0ab1cb8-dc08-4f72-a765-083f2a511a7e\" (UID: \"c0ab1cb8-dc08-4f72-a765-083f2a511a7e\") " Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.940040 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-kube-api-access-5zg7j" (OuterVolumeSpecName: "kube-api-access-5zg7j") pod "c0ab1cb8-dc08-4f72-a765-083f2a511a7e" (UID: "c0ab1cb8-dc08-4f72-a765-083f2a511a7e"). InnerVolumeSpecName "kube-api-access-5zg7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:47:36 crc kubenswrapper[4766]: I0129 11:47:36.971686 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "c0ab1cb8-dc08-4f72-a765-083f2a511a7e" (UID: "c0ab1cb8-dc08-4f72-a765-083f2a511a7e"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:37 crc kubenswrapper[4766]: I0129 11:47:37.020339 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-config" (OuterVolumeSpecName: "config") pod "c0ab1cb8-dc08-4f72-a765-083f2a511a7e" (UID: "c0ab1cb8-dc08-4f72-a765-083f2a511a7e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:37 crc kubenswrapper[4766]: I0129 11:47:37.033283 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zg7j\" (UniqueName: \"kubernetes.io/projected/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-kube-api-access-5zg7j\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:37 crc kubenswrapper[4766]: I0129 11:47:37.033404 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:37 crc kubenswrapper[4766]: I0129 11:47:37.033446 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:37 crc kubenswrapper[4766]: I0129 11:47:37.040785 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0ab1cb8-dc08-4f72-a765-083f2a511a7e" (UID: "c0ab1cb8-dc08-4f72-a765-083f2a511a7e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:37 crc kubenswrapper[4766]: I0129 11:47:37.049610 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "c0ab1cb8-dc08-4f72-a765-083f2a511a7e" (UID: "c0ab1cb8-dc08-4f72-a765-083f2a511a7e"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:37 crc kubenswrapper[4766]: I0129 11:47:37.134747 4766 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:37 crc kubenswrapper[4766]: I0129 11:47:37.134774 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0ab1cb8-dc08-4f72-a765-083f2a511a7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:37 crc kubenswrapper[4766]: I0129 11:47:37.216584 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c9155e4-f0e0-4edb-a814-7db1466002e7","Type":"ContainerStarted","Data":"353b9fade6ecdf5e76523c0fb398df5348a94763ded4ab4fe0587572aa17e1a2"} Jan 29 11:47:37 crc kubenswrapper[4766]: I0129 11:47:37.219197 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dd1ffb49-b314-4d31-94d6-de70e35d917e","Type":"ContainerStarted","Data":"a5870626b08c5ff65aad3d62a1002578aa41b4503406b749e77a94df8bdaa959"} Jan 29 11:47:37 crc kubenswrapper[4766]: I0129 11:47:37.221615 4766 generic.go:334] "Generic (PLEG): container finished" podID="c0ab1cb8-dc08-4f72-a765-083f2a511a7e" containerID="5569ca78fd49a0ab2d22d3c15c4e29ed104f29d7a18d5ef53dce7fddd9af6896" exitCode=0 Jan 29 11:47:37 crc kubenswrapper[4766]: I0129 11:47:37.222170 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6854899c48-wx94v" Jan 29 11:47:37 crc kubenswrapper[4766]: I0129 11:47:37.223522 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6854899c48-wx94v" event={"ID":"c0ab1cb8-dc08-4f72-a765-083f2a511a7e","Type":"ContainerDied","Data":"5569ca78fd49a0ab2d22d3c15c4e29ed104f29d7a18d5ef53dce7fddd9af6896"} Jan 29 11:47:37 crc kubenswrapper[4766]: I0129 11:47:37.223619 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6854899c48-wx94v" event={"ID":"c0ab1cb8-dc08-4f72-a765-083f2a511a7e","Type":"ContainerDied","Data":"bd1537d0a18ffbf93abbfed297fc88ac8b764a746f51d33016ffc69a4d7c0bc5"} Jan 29 11:47:37 crc kubenswrapper[4766]: I0129 11:47:37.224434 4766 scope.go:117] "RemoveContainer" containerID="39e0f3e0ffe14a10427ef4dfc0519bb7bc13b268ccc6da302855ea96686846e9" Jan 29 11:47:37 crc kubenswrapper[4766]: I0129 11:47:37.249501 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.249483949 podStartE2EDuration="5.249483949s" podCreationTimestamp="2026-01-29 11:47:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:47:37.247245066 +0000 UTC m=+1594.359638087" watchObservedRunningTime="2026-01-29 11:47:37.249483949 +0000 UTC m=+1594.361876960" Jan 29 11:47:37 crc kubenswrapper[4766]: I0129 11:47:37.257867 4766 scope.go:117] "RemoveContainer" containerID="5569ca78fd49a0ab2d22d3c15c4e29ed104f29d7a18d5ef53dce7fddd9af6896" Jan 29 11:47:37 crc kubenswrapper[4766]: I0129 11:47:37.288625 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6854899c48-wx94v"] Jan 29 11:47:37 crc kubenswrapper[4766]: I0129 11:47:37.305276 4766 scope.go:117] "RemoveContainer" containerID="39e0f3e0ffe14a10427ef4dfc0519bb7bc13b268ccc6da302855ea96686846e9" Jan 29 11:47:37 crc kubenswrapper[4766]: I0129 11:47:37.306896 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6854899c48-wx94v"] Jan 29 11:47:37 crc kubenswrapper[4766]: E0129 11:47:37.309136 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39e0f3e0ffe14a10427ef4dfc0519bb7bc13b268ccc6da302855ea96686846e9\": container with ID starting with 39e0f3e0ffe14a10427ef4dfc0519bb7bc13b268ccc6da302855ea96686846e9 not found: ID does not exist" containerID="39e0f3e0ffe14a10427ef4dfc0519bb7bc13b268ccc6da302855ea96686846e9" Jan 29 11:47:37 crc kubenswrapper[4766]: I0129 11:47:37.309186 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39e0f3e0ffe14a10427ef4dfc0519bb7bc13b268ccc6da302855ea96686846e9"} err="failed to get container status \"39e0f3e0ffe14a10427ef4dfc0519bb7bc13b268ccc6da302855ea96686846e9\": rpc error: code = NotFound desc = could not find container \"39e0f3e0ffe14a10427ef4dfc0519bb7bc13b268ccc6da302855ea96686846e9\": container with ID starting with 39e0f3e0ffe14a10427ef4dfc0519bb7bc13b268ccc6da302855ea96686846e9 not found: ID does not exist" Jan 29 11:47:37 crc kubenswrapper[4766]: I0129 11:47:37.309210 4766 scope.go:117] "RemoveContainer" containerID="5569ca78fd49a0ab2d22d3c15c4e29ed104f29d7a18d5ef53dce7fddd9af6896" Jan 29 11:47:37 crc kubenswrapper[4766]: E0129 11:47:37.309918 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5569ca78fd49a0ab2d22d3c15c4e29ed104f29d7a18d5ef53dce7fddd9af6896\": container with ID starting with 5569ca78fd49a0ab2d22d3c15c4e29ed104f29d7a18d5ef53dce7fddd9af6896 not found: ID does not exist" containerID="5569ca78fd49a0ab2d22d3c15c4e29ed104f29d7a18d5ef53dce7fddd9af6896" Jan 29 11:47:37 crc kubenswrapper[4766]: I0129 11:47:37.309960 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5569ca78fd49a0ab2d22d3c15c4e29ed104f29d7a18d5ef53dce7fddd9af6896"} err="failed to get container status \"5569ca78fd49a0ab2d22d3c15c4e29ed104f29d7a18d5ef53dce7fddd9af6896\": rpc error: code = NotFound desc = could not find container \"5569ca78fd49a0ab2d22d3c15c4e29ed104f29d7a18d5ef53dce7fddd9af6896\": container with ID starting with 5569ca78fd49a0ab2d22d3c15c4e29ed104f29d7a18d5ef53dce7fddd9af6896 not found: ID does not exist" Jan 29 11:47:37 crc kubenswrapper[4766]: E0129 11:47:37.373722 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0ab1cb8_dc08_4f72_a765_083f2a511a7e.slice/crio-bd1537d0a18ffbf93abbfed297fc88ac8b764a746f51d33016ffc69a4d7c0bc5\": RecentStats: unable to find data in memory cache]" Jan 29 11:47:39 crc kubenswrapper[4766]: I0129 11:47:39.236586 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0ab1cb8-dc08-4f72-a765-083f2a511a7e" path="/var/lib/kubelet/pods/c0ab1cb8-dc08-4f72-a765-083f2a511a7e/volumes" Jan 29 11:47:39 crc kubenswrapper[4766]: I0129 11:47:39.241509 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c9155e4-f0e0-4edb-a814-7db1466002e7","Type":"ContainerStarted","Data":"11e5334420c2a1b9248b9ef54849df0f838d44c864189c794f84805e14898f77"} Jan 29 11:47:39 crc kubenswrapper[4766]: I0129 11:47:39.241698 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 11:47:39 crc kubenswrapper[4766]: I0129 11:47:39.270827 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.019105278 podStartE2EDuration="8.270792067s" podCreationTimestamp="2026-01-29 11:47:31 +0000 UTC" firstStartedPulling="2026-01-29 11:47:33.2269886 +0000 UTC m=+1590.339381611" lastFinishedPulling="2026-01-29 11:47:38.478675389 +0000 UTC m=+1595.591068400" observedRunningTime="2026-01-29 11:47:39.263595087 +0000 UTC m=+1596.375988098" watchObservedRunningTime="2026-01-29 11:47:39.270792067 +0000 UTC m=+1596.383185068" Jan 29 11:47:40 crc kubenswrapper[4766]: I0129 11:47:40.827438 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cv24v"] Jan 29 11:47:40 crc kubenswrapper[4766]: E0129 11:47:40.839763 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="075438aa-afe6-4a7c-aa4a-a9b89406b170" containerName="mariadb-database-create" Jan 29 11:47:40 crc kubenswrapper[4766]: I0129 11:47:40.839795 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="075438aa-afe6-4a7c-aa4a-a9b89406b170" containerName="mariadb-database-create" Jan 29 11:47:40 crc kubenswrapper[4766]: E0129 11:47:40.839816 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b251c8b1-bef8-4e31-86dd-fdfca1dc0594" containerName="mariadb-account-create-update" Jan 29 11:47:40 crc kubenswrapper[4766]: I0129 11:47:40.839823 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b251c8b1-bef8-4e31-86dd-fdfca1dc0594" 
containerName="mariadb-account-create-update" Jan 29 11:47:40 crc kubenswrapper[4766]: E0129 11:47:40.839830 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd606a73-05c8-4c8f-b4f2-281a9f308e43" containerName="mariadb-account-create-update" Jan 29 11:47:40 crc kubenswrapper[4766]: I0129 11:47:40.839837 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd606a73-05c8-4c8f-b4f2-281a9f308e43" containerName="mariadb-account-create-update" Jan 29 11:47:40 crc kubenswrapper[4766]: E0129 11:47:40.839855 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0ab1cb8-dc08-4f72-a765-083f2a511a7e" containerName="neutron-api" Jan 29 11:47:40 crc kubenswrapper[4766]: I0129 11:47:40.839860 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0ab1cb8-dc08-4f72-a765-083f2a511a7e" containerName="neutron-api" Jan 29 11:47:40 crc kubenswrapper[4766]: E0129 11:47:40.839871 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0ab1cb8-dc08-4f72-a765-083f2a511a7e" containerName="neutron-httpd" Jan 29 11:47:40 crc kubenswrapper[4766]: I0129 11:47:40.839878 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0ab1cb8-dc08-4f72-a765-083f2a511a7e" containerName="neutron-httpd" Jan 29 11:47:40 crc kubenswrapper[4766]: E0129 11:47:40.839890 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae3e2e2-58b5-4a7a-ae77-7712d85990ea" containerName="mariadb-database-create" Jan 29 11:47:40 crc kubenswrapper[4766]: I0129 11:47:40.839895 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae3e2e2-58b5-4a7a-ae77-7712d85990ea" containerName="mariadb-database-create" Jan 29 11:47:40 crc kubenswrapper[4766]: E0129 11:47:40.839906 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e" containerName="mariadb-database-create" Jan 29 11:47:40 crc kubenswrapper[4766]: I0129 11:47:40.839913 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e" containerName="mariadb-database-create" Jan 29 11:47:40 crc kubenswrapper[4766]: E0129 11:47:40.839924 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5484d77-284b-4422-aa8a-c44761f4c8e9" containerName="mariadb-account-create-update" Jan 29 11:47:40 crc kubenswrapper[4766]: I0129 11:47:40.839931 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5484d77-284b-4422-aa8a-c44761f4c8e9" containerName="mariadb-account-create-update" Jan 29 11:47:40 crc kubenswrapper[4766]: I0129 11:47:40.840086 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0ab1cb8-dc08-4f72-a765-083f2a511a7e" containerName="neutron-httpd" Jan 29 11:47:40 crc kubenswrapper[4766]: I0129 11:47:40.840099 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="075438aa-afe6-4a7c-aa4a-a9b89406b170" containerName="mariadb-database-create" Jan 29 11:47:40 crc kubenswrapper[4766]: I0129 11:47:40.840110 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae3e2e2-58b5-4a7a-ae77-7712d85990ea" containerName="mariadb-database-create" Jan 29 11:47:40 crc kubenswrapper[4766]: I0129 11:47:40.840121 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd606a73-05c8-4c8f-b4f2-281a9f308e43" containerName="mariadb-account-create-update" Jan 29 11:47:40 crc kubenswrapper[4766]: I0129 11:47:40.840132 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5484d77-284b-4422-aa8a-c44761f4c8e9" 
containerName="mariadb-account-create-update" Jan 29 11:47:40 crc kubenswrapper[4766]: I0129 11:47:40.840141 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e" containerName="mariadb-database-create" Jan 29 11:47:40 crc kubenswrapper[4766]: I0129 11:47:40.840155 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0ab1cb8-dc08-4f72-a765-083f2a511a7e" containerName="neutron-api" Jan 29 11:47:40 crc kubenswrapper[4766]: I0129 11:47:40.840164 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b251c8b1-bef8-4e31-86dd-fdfca1dc0594" containerName="mariadb-account-create-update" Jan 29 11:47:40 crc kubenswrapper[4766]: I0129 11:47:40.840660 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cv24v"] Jan 29 11:47:40 crc kubenswrapper[4766]: I0129 11:47:40.840739 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cv24v" Jan 29 11:47:40 crc kubenswrapper[4766]: I0129 11:47:40.842795 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-7mg9r" Jan 29 11:47:40 crc kubenswrapper[4766]: I0129 11:47:40.843829 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 29 11:47:40 crc kubenswrapper[4766]: I0129 11:47:40.847006 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 11:47:40 crc kubenswrapper[4766]: I0129 11:47:40.900594 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ecbeee8-c61d-4b75-bc60-021d3739e386-config-data\") pod \"nova-cell0-conductor-db-sync-cv24v\" (UID: \"8ecbeee8-c61d-4b75-bc60-021d3739e386\") " pod="openstack/nova-cell0-conductor-db-sync-cv24v" Jan 29 11:47:40 crc kubenswrapper[4766]: I0129 11:47:40.900719 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ecbeee8-c61d-4b75-bc60-021d3739e386-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cv24v\" (UID: \"8ecbeee8-c61d-4b75-bc60-021d3739e386\") " pod="openstack/nova-cell0-conductor-db-sync-cv24v" Jan 29 11:47:40 crc kubenswrapper[4766]: I0129 11:47:40.900813 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ecbeee8-c61d-4b75-bc60-021d3739e386-scripts\") pod \"nova-cell0-conductor-db-sync-cv24v\" (UID: \"8ecbeee8-c61d-4b75-bc60-021d3739e386\") " pod="openstack/nova-cell0-conductor-db-sync-cv24v" Jan 29 11:47:40 crc kubenswrapper[4766]: I0129 11:47:40.901114 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxkm7\" (UniqueName: \"kubernetes.io/projected/8ecbeee8-c61d-4b75-bc60-021d3739e386-kube-api-access-xxkm7\") pod \"nova-cell0-conductor-db-sync-cv24v\" (UID: \"8ecbeee8-c61d-4b75-bc60-021d3739e386\") " pod="openstack/nova-cell0-conductor-db-sync-cv24v" Jan 29 11:47:41 crc kubenswrapper[4766]: I0129 11:47:41.003325 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ecbeee8-c61d-4b75-bc60-021d3739e386-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cv24v\" (UID: 
\"8ecbeee8-c61d-4b75-bc60-021d3739e386\") " pod="openstack/nova-cell0-conductor-db-sync-cv24v" Jan 29 11:47:41 crc kubenswrapper[4766]: I0129 11:47:41.003388 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ecbeee8-c61d-4b75-bc60-021d3739e386-scripts\") pod \"nova-cell0-conductor-db-sync-cv24v\" (UID: \"8ecbeee8-c61d-4b75-bc60-021d3739e386\") " pod="openstack/nova-cell0-conductor-db-sync-cv24v" Jan 29 11:47:41 crc kubenswrapper[4766]: I0129 11:47:41.003484 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxkm7\" (UniqueName: \"kubernetes.io/projected/8ecbeee8-c61d-4b75-bc60-021d3739e386-kube-api-access-xxkm7\") pod \"nova-cell0-conductor-db-sync-cv24v\" (UID: \"8ecbeee8-c61d-4b75-bc60-021d3739e386\") " pod="openstack/nova-cell0-conductor-db-sync-cv24v" Jan 29 11:47:41 crc kubenswrapper[4766]: I0129 11:47:41.003541 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ecbeee8-c61d-4b75-bc60-021d3739e386-config-data\") pod \"nova-cell0-conductor-db-sync-cv24v\" (UID: \"8ecbeee8-c61d-4b75-bc60-021d3739e386\") " pod="openstack/nova-cell0-conductor-db-sync-cv24v" Jan 29 11:47:41 crc kubenswrapper[4766]: I0129 11:47:41.009716 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ecbeee8-c61d-4b75-bc60-021d3739e386-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cv24v\" (UID: \"8ecbeee8-c61d-4b75-bc60-021d3739e386\") " pod="openstack/nova-cell0-conductor-db-sync-cv24v" Jan 29 11:47:41 crc kubenswrapper[4766]: I0129 11:47:41.011158 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ecbeee8-c61d-4b75-bc60-021d3739e386-scripts\") pod \"nova-cell0-conductor-db-sync-cv24v\" (UID: \"8ecbeee8-c61d-4b75-bc60-021d3739e386\") " pod="openstack/nova-cell0-conductor-db-sync-cv24v" Jan 29 11:47:41 crc kubenswrapper[4766]: I0129 11:47:41.023246 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ecbeee8-c61d-4b75-bc60-021d3739e386-config-data\") pod \"nova-cell0-conductor-db-sync-cv24v\" (UID: \"8ecbeee8-c61d-4b75-bc60-021d3739e386\") " pod="openstack/nova-cell0-conductor-db-sync-cv24v" Jan 29 11:47:41 crc kubenswrapper[4766]: I0129 11:47:41.026870 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxkm7\" (UniqueName: \"kubernetes.io/projected/8ecbeee8-c61d-4b75-bc60-021d3739e386-kube-api-access-xxkm7\") pod \"nova-cell0-conductor-db-sync-cv24v\" (UID: \"8ecbeee8-c61d-4b75-bc60-021d3739e386\") " pod="openstack/nova-cell0-conductor-db-sync-cv24v" Jan 29 11:47:41 crc kubenswrapper[4766]: I0129 11:47:41.040499 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:47:41 crc kubenswrapper[4766]: I0129 11:47:41.166761 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cv24v" Jan 29 11:47:41 crc kubenswrapper[4766]: I0129 11:47:41.321677 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8c9155e4-f0e0-4edb-a814-7db1466002e7" containerName="ceilometer-central-agent" containerID="cri-o://518ba0c3fb255242564bc1f7624df98e5fddad11773d48aec17ba9dafcf48845" gracePeriod=30 Jan 29 11:47:41 crc kubenswrapper[4766]: I0129 11:47:41.322334 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8c9155e4-f0e0-4edb-a814-7db1466002e7" containerName="proxy-httpd" containerID="cri-o://11e5334420c2a1b9248b9ef54849df0f838d44c864189c794f84805e14898f77" gracePeriod=30 Jan 29 11:47:41 crc kubenswrapper[4766]: I0129 11:47:41.322389 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8c9155e4-f0e0-4edb-a814-7db1466002e7" containerName="sg-core" containerID="cri-o://353b9fade6ecdf5e76523c0fb398df5348a94763ded4ab4fe0587572aa17e1a2" gracePeriod=30 Jan 29 11:47:41 crc kubenswrapper[4766]: I0129 11:47:41.322453 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8c9155e4-f0e0-4edb-a814-7db1466002e7" containerName="ceilometer-notification-agent" containerID="cri-o://d0acd5f1fbd5f9a5efec13f7d0171b4a80177f7d4ce942590c1d0cd644ea98c7" gracePeriod=30 Jan 29 11:47:41 crc kubenswrapper[4766]: I0129 11:47:41.799315 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cv24v"] Jan 29 11:47:42 crc kubenswrapper[4766]: I0129 11:47:42.003854 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:47:42 crc kubenswrapper[4766]: I0129 11:47:42.124513 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c9155e4-f0e0-4edb-a814-7db1466002e7-config-data\") pod \"8c9155e4-f0e0-4edb-a814-7db1466002e7\" (UID: \"8c9155e4-f0e0-4edb-a814-7db1466002e7\") " Jan 29 11:47:42 crc kubenswrapper[4766]: I0129 11:47:42.124799 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c9155e4-f0e0-4edb-a814-7db1466002e7-log-httpd\") pod \"8c9155e4-f0e0-4edb-a814-7db1466002e7\" (UID: \"8c9155e4-f0e0-4edb-a814-7db1466002e7\") " Jan 29 11:47:42 crc kubenswrapper[4766]: I0129 11:47:42.124909 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c9155e4-f0e0-4edb-a814-7db1466002e7-run-httpd\") pod \"8c9155e4-f0e0-4edb-a814-7db1466002e7\" (UID: \"8c9155e4-f0e0-4edb-a814-7db1466002e7\") " Jan 29 11:47:42 crc kubenswrapper[4766]: I0129 11:47:42.125062 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c9155e4-f0e0-4edb-a814-7db1466002e7-scripts\") pod \"8c9155e4-f0e0-4edb-a814-7db1466002e7\" (UID: \"8c9155e4-f0e0-4edb-a814-7db1466002e7\") " Jan 29 11:47:42 crc kubenswrapper[4766]: I0129 11:47:42.125159 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c9155e4-f0e0-4edb-a814-7db1466002e7-combined-ca-bundle\") pod \"8c9155e4-f0e0-4edb-a814-7db1466002e7\" (UID: \"8c9155e4-f0e0-4edb-a814-7db1466002e7\") " Jan 29 
11:47:42 crc kubenswrapper[4766]: I0129 11:47:42.125265 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c9155e4-f0e0-4edb-a814-7db1466002e7-sg-core-conf-yaml\") pod \"8c9155e4-f0e0-4edb-a814-7db1466002e7\" (UID: \"8c9155e4-f0e0-4edb-a814-7db1466002e7\") " Jan 29 11:47:42 crc kubenswrapper[4766]: I0129 11:47:42.125433 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hsjv\" (UniqueName: \"kubernetes.io/projected/8c9155e4-f0e0-4edb-a814-7db1466002e7-kube-api-access-6hsjv\") pod \"8c9155e4-f0e0-4edb-a814-7db1466002e7\" (UID: \"8c9155e4-f0e0-4edb-a814-7db1466002e7\") " Jan 29 11:47:42 crc kubenswrapper[4766]: I0129 11:47:42.125502 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c9155e4-f0e0-4edb-a814-7db1466002e7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8c9155e4-f0e0-4edb-a814-7db1466002e7" (UID: "8c9155e4-f0e0-4edb-a814-7db1466002e7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:47:42 crc kubenswrapper[4766]: I0129 11:47:42.125797 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c9155e4-f0e0-4edb-a814-7db1466002e7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8c9155e4-f0e0-4edb-a814-7db1466002e7" (UID: "8c9155e4-f0e0-4edb-a814-7db1466002e7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:47:42 crc kubenswrapper[4766]: I0129 11:47:42.126018 4766 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c9155e4-f0e0-4edb-a814-7db1466002e7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:42 crc kubenswrapper[4766]: I0129 11:47:42.126101 4766 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c9155e4-f0e0-4edb-a814-7db1466002e7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:42 crc kubenswrapper[4766]: I0129 11:47:42.177320 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 11:47:42 crc kubenswrapper[4766]: I0129 11:47:42.337666 4766 generic.go:334] "Generic (PLEG): container finished" podID="8c9155e4-f0e0-4edb-a814-7db1466002e7" containerID="11e5334420c2a1b9248b9ef54849df0f838d44c864189c794f84805e14898f77" exitCode=0 Jan 29 11:47:42 crc kubenswrapper[4766]: I0129 11:47:42.338077 4766 generic.go:334] "Generic (PLEG): container finished" podID="8c9155e4-f0e0-4edb-a814-7db1466002e7" containerID="353b9fade6ecdf5e76523c0fb398df5348a94763ded4ab4fe0587572aa17e1a2" exitCode=2 Jan 29 11:47:42 crc kubenswrapper[4766]: I0129 11:47:42.338092 4766 generic.go:334] "Generic (PLEG): container finished" podID="8c9155e4-f0e0-4edb-a814-7db1466002e7" containerID="d0acd5f1fbd5f9a5efec13f7d0171b4a80177f7d4ce942590c1d0cd644ea98c7" exitCode=0 Jan 29 11:47:42 crc kubenswrapper[4766]: I0129 11:47:42.338101 4766 generic.go:334] "Generic (PLEG): container finished" podID="8c9155e4-f0e0-4edb-a814-7db1466002e7" containerID="518ba0c3fb255242564bc1f7624df98e5fddad11773d48aec17ba9dafcf48845" exitCode=0 Jan 29 11:47:42 crc kubenswrapper[4766]: I0129 11:47:42.338179 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"8c9155e4-f0e0-4edb-a814-7db1466002e7","Type":"ContainerDied","Data":"11e5334420c2a1b9248b9ef54849df0f838d44c864189c794f84805e14898f77"} Jan 29 11:47:42 crc kubenswrapper[4766]: I0129 11:47:42.338277 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c9155e4-f0e0-4edb-a814-7db1466002e7","Type":"ContainerDied","Data":"353b9fade6ecdf5e76523c0fb398df5348a94763ded4ab4fe0587572aa17e1a2"} Jan 29 11:47:42 crc kubenswrapper[4766]: I0129 11:47:42.338366 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c9155e4-f0e0-4edb-a814-7db1466002e7","Type":"ContainerDied","Data":"d0acd5f1fbd5f9a5efec13f7d0171b4a80177f7d4ce942590c1d0cd644ea98c7"} Jan 29 11:47:42 crc kubenswrapper[4766]: I0129 11:47:42.338469 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c9155e4-f0e0-4edb-a814-7db1466002e7","Type":"ContainerDied","Data":"518ba0c3fb255242564bc1f7624df98e5fddad11773d48aec17ba9dafcf48845"} Jan 29 11:47:42 crc kubenswrapper[4766]: I0129 11:47:42.338495 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c9155e4-f0e0-4edb-a814-7db1466002e7","Type":"ContainerDied","Data":"6bd8014422c1896086174900fdf38208977bacb62378c914bc6a3506c6fb3e4c"} Jan 29 11:47:42 crc kubenswrapper[4766]: I0129 11:47:42.338541 4766 scope.go:117] "RemoveContainer" containerID="11e5334420c2a1b9248b9ef54849df0f838d44c864189c794f84805e14898f77" Jan 29 11:47:42 crc kubenswrapper[4766]: I0129 11:47:42.338850 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:47:42 crc kubenswrapper[4766]: I0129 11:47:42.342359 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cv24v" event={"ID":"8ecbeee8-c61d-4b75-bc60-021d3739e386","Type":"ContainerStarted","Data":"56652e963ae7ac747ef63afabb215b08b272cb197950cbb0e12b43f6c25491c7"} Jan 29 11:47:42 crc kubenswrapper[4766]: I0129 11:47:42.395972 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 11:47:42 crc kubenswrapper[4766]: I0129 11:47:42.396030 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 11:47:42 crc kubenswrapper[4766]: I0129 11:47:42.432326 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 11:47:42 crc kubenswrapper[4766]: I0129 11:47:42.445985 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 11:47:43 crc kubenswrapper[4766]: I0129 11:47:43.738464 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c9155e4-f0e0-4edb-a814-7db1466002e7-scripts" (OuterVolumeSpecName: "scripts") pod "8c9155e4-f0e0-4edb-a814-7db1466002e7" (UID: "8c9155e4-f0e0-4edb-a814-7db1466002e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:43 crc kubenswrapper[4766]: I0129 11:47:43.742953 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c9155e4-f0e0-4edb-a814-7db1466002e7-kube-api-access-6hsjv" (OuterVolumeSpecName: "kube-api-access-6hsjv") pod "8c9155e4-f0e0-4edb-a814-7db1466002e7" (UID: "8c9155e4-f0e0-4edb-a814-7db1466002e7"). 
InnerVolumeSpecName "kube-api-access-6hsjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:47:43 crc kubenswrapper[4766]: I0129 11:47:43.747523 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c9155e4-f0e0-4edb-a814-7db1466002e7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8c9155e4-f0e0-4edb-a814-7db1466002e7" (UID: "8c9155e4-f0e0-4edb-a814-7db1466002e7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:43 crc kubenswrapper[4766]: I0129 11:47:43.757274 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hsjv\" (UniqueName: \"kubernetes.io/projected/8c9155e4-f0e0-4edb-a814-7db1466002e7-kube-api-access-6hsjv\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:43 crc kubenswrapper[4766]: I0129 11:47:43.757331 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c9155e4-f0e0-4edb-a814-7db1466002e7-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:43 crc kubenswrapper[4766]: I0129 11:47:43.757345 4766 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c9155e4-f0e0-4edb-a814-7db1466002e7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:43 crc kubenswrapper[4766]: I0129 11:47:43.799903 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c9155e4-f0e0-4edb-a814-7db1466002e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c9155e4-f0e0-4edb-a814-7db1466002e7" (UID: "8c9155e4-f0e0-4edb-a814-7db1466002e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:43 crc kubenswrapper[4766]: I0129 11:47:43.836645 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c9155e4-f0e0-4edb-a814-7db1466002e7-config-data" (OuterVolumeSpecName: "config-data") pod "8c9155e4-f0e0-4edb-a814-7db1466002e7" (UID: "8c9155e4-f0e0-4edb-a814-7db1466002e7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:43 crc kubenswrapper[4766]: I0129 11:47:43.858845 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c9155e4-f0e0-4edb-a814-7db1466002e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:43 crc kubenswrapper[4766]: I0129 11:47:43.859109 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c9155e4-f0e0-4edb-a814-7db1466002e7-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:43 crc kubenswrapper[4766]: I0129 11:47:43.913742 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 11:47:43 crc kubenswrapper[4766]: I0129 11:47:43.913777 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 11:47:43 crc kubenswrapper[4766]: I0129 11:47:43.913793 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 11:47:43 crc kubenswrapper[4766]: I0129 11:47:43.913843 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 11:47:43 crc kubenswrapper[4766]: I0129 11:47:43.913915 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 11:47:43 crc kubenswrapper[4766]: I0129 11:47:43.914009 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 11:47:43 crc kubenswrapper[4766]: I0129 11:47:43.929964 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 11:47:43 crc kubenswrapper[4766]: I0129 11:47:43.930021 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 11:47:43 crc kubenswrapper[4766]: I0129 11:47:43.931710 4766 scope.go:117] "RemoveContainer" containerID="353b9fade6ecdf5e76523c0fb398df5348a94763ded4ab4fe0587572aa17e1a2" Jan 29 11:47:43 crc kubenswrapper[4766]: I0129 11:47:43.972170 4766 scope.go:117] "RemoveContainer" containerID="d0acd5f1fbd5f9a5efec13f7d0171b4a80177f7d4ce942590c1d0cd644ea98c7" Jan 29 11:47:43 crc kubenswrapper[4766]: I0129 11:47:43.994865 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.006496 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.015302 4766 scope.go:117] "RemoveContainer" containerID="518ba0c3fb255242564bc1f7624df98e5fddad11773d48aec17ba9dafcf48845" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.021434 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:47:44 crc kubenswrapper[4766]: E0129 11:47:44.021921 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c9155e4-f0e0-4edb-a814-7db1466002e7" containerName="proxy-httpd" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.021945 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c9155e4-f0e0-4edb-a814-7db1466002e7" containerName="proxy-httpd" Jan 29 11:47:44 crc kubenswrapper[4766]: E0129 11:47:44.021965 4766 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8c9155e4-f0e0-4edb-a814-7db1466002e7" containerName="ceilometer-central-agent" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.021975 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c9155e4-f0e0-4edb-a814-7db1466002e7" containerName="ceilometer-central-agent" Jan 29 11:47:44 crc kubenswrapper[4766]: E0129 11:47:44.021990 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c9155e4-f0e0-4edb-a814-7db1466002e7" containerName="ceilometer-notification-agent" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.021999 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c9155e4-f0e0-4edb-a814-7db1466002e7" containerName="ceilometer-notification-agent" Jan 29 11:47:44 crc kubenswrapper[4766]: E0129 11:47:44.022096 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c9155e4-f0e0-4edb-a814-7db1466002e7" containerName="sg-core" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.022104 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c9155e4-f0e0-4edb-a814-7db1466002e7" containerName="sg-core" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.022325 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c9155e4-f0e0-4edb-a814-7db1466002e7" containerName="proxy-httpd" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.022350 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c9155e4-f0e0-4edb-a814-7db1466002e7" containerName="ceilometer-notification-agent" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.022390 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c9155e4-f0e0-4edb-a814-7db1466002e7" containerName="ceilometer-central-agent" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.022404 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c9155e4-f0e0-4edb-a814-7db1466002e7" containerName="sg-core" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.024558 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.029831 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.030037 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.035564 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.064044 4766 scope.go:117] "RemoveContainer" containerID="11e5334420c2a1b9248b9ef54849df0f838d44c864189c794f84805e14898f77" Jan 29 11:47:44 crc kubenswrapper[4766]: E0129 11:47:44.068561 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11e5334420c2a1b9248b9ef54849df0f838d44c864189c794f84805e14898f77\": container with ID starting with 11e5334420c2a1b9248b9ef54849df0f838d44c864189c794f84805e14898f77 not found: ID does not exist" containerID="11e5334420c2a1b9248b9ef54849df0f838d44c864189c794f84805e14898f77" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.068599 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11e5334420c2a1b9248b9ef54849df0f838d44c864189c794f84805e14898f77"} err="failed to get container status \"11e5334420c2a1b9248b9ef54849df0f838d44c864189c794f84805e14898f77\": rpc error: code = NotFound desc = could not find container \"11e5334420c2a1b9248b9ef54849df0f838d44c864189c794f84805e14898f77\": container with ID starting with 11e5334420c2a1b9248b9ef54849df0f838d44c864189c794f84805e14898f77 not found: ID does not exist" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.068622 4766 scope.go:117] "RemoveContainer" containerID="353b9fade6ecdf5e76523c0fb398df5348a94763ded4ab4fe0587572aa17e1a2" Jan 29 11:47:44 crc kubenswrapper[4766]: E0129 11:47:44.075375 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"353b9fade6ecdf5e76523c0fb398df5348a94763ded4ab4fe0587572aa17e1a2\": container with ID starting with 353b9fade6ecdf5e76523c0fb398df5348a94763ded4ab4fe0587572aa17e1a2 not found: ID does not exist" containerID="353b9fade6ecdf5e76523c0fb398df5348a94763ded4ab4fe0587572aa17e1a2" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.075439 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"353b9fade6ecdf5e76523c0fb398df5348a94763ded4ab4fe0587572aa17e1a2"} err="failed to get container status \"353b9fade6ecdf5e76523c0fb398df5348a94763ded4ab4fe0587572aa17e1a2\": rpc error: code = NotFound desc = could not find container \"353b9fade6ecdf5e76523c0fb398df5348a94763ded4ab4fe0587572aa17e1a2\": container with ID starting with 353b9fade6ecdf5e76523c0fb398df5348a94763ded4ab4fe0587572aa17e1a2 not found: ID does not exist" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.075470 4766 scope.go:117] "RemoveContainer" containerID="d0acd5f1fbd5f9a5efec13f7d0171b4a80177f7d4ce942590c1d0cd644ea98c7" Jan 29 11:47:44 crc kubenswrapper[4766]: E0129 11:47:44.078896 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0acd5f1fbd5f9a5efec13f7d0171b4a80177f7d4ce942590c1d0cd644ea98c7\": container with ID starting with d0acd5f1fbd5f9a5efec13f7d0171b4a80177f7d4ce942590c1d0cd644ea98c7 not found: ID 
does not exist" containerID="d0acd5f1fbd5f9a5efec13f7d0171b4a80177f7d4ce942590c1d0cd644ea98c7" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.078962 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0acd5f1fbd5f9a5efec13f7d0171b4a80177f7d4ce942590c1d0cd644ea98c7"} err="failed to get container status \"d0acd5f1fbd5f9a5efec13f7d0171b4a80177f7d4ce942590c1d0cd644ea98c7\": rpc error: code = NotFound desc = could not find container \"d0acd5f1fbd5f9a5efec13f7d0171b4a80177f7d4ce942590c1d0cd644ea98c7\": container with ID starting with d0acd5f1fbd5f9a5efec13f7d0171b4a80177f7d4ce942590c1d0cd644ea98c7 not found: ID does not exist" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.078996 4766 scope.go:117] "RemoveContainer" containerID="518ba0c3fb255242564bc1f7624df98e5fddad11773d48aec17ba9dafcf48845" Jan 29 11:47:44 crc kubenswrapper[4766]: E0129 11:47:44.080638 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"518ba0c3fb255242564bc1f7624df98e5fddad11773d48aec17ba9dafcf48845\": container with ID starting with 518ba0c3fb255242564bc1f7624df98e5fddad11773d48aec17ba9dafcf48845 not found: ID does not exist" containerID="518ba0c3fb255242564bc1f7624df98e5fddad11773d48aec17ba9dafcf48845" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.080685 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"518ba0c3fb255242564bc1f7624df98e5fddad11773d48aec17ba9dafcf48845"} err="failed to get container status \"518ba0c3fb255242564bc1f7624df98e5fddad11773d48aec17ba9dafcf48845\": rpc error: code = NotFound desc = could not find container \"518ba0c3fb255242564bc1f7624df98e5fddad11773d48aec17ba9dafcf48845\": container with ID starting with 518ba0c3fb255242564bc1f7624df98e5fddad11773d48aec17ba9dafcf48845 not found: ID does not exist" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.080714 4766 scope.go:117] "RemoveContainer" containerID="11e5334420c2a1b9248b9ef54849df0f838d44c864189c794f84805e14898f77" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.081254 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11e5334420c2a1b9248b9ef54849df0f838d44c864189c794f84805e14898f77"} err="failed to get container status \"11e5334420c2a1b9248b9ef54849df0f838d44c864189c794f84805e14898f77\": rpc error: code = NotFound desc = could not find container \"11e5334420c2a1b9248b9ef54849df0f838d44c864189c794f84805e14898f77\": container with ID starting with 11e5334420c2a1b9248b9ef54849df0f838d44c864189c794f84805e14898f77 not found: ID does not exist" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.081273 4766 scope.go:117] "RemoveContainer" containerID="353b9fade6ecdf5e76523c0fb398df5348a94763ded4ab4fe0587572aa17e1a2" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.081598 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"353b9fade6ecdf5e76523c0fb398df5348a94763ded4ab4fe0587572aa17e1a2"} err="failed to get container status \"353b9fade6ecdf5e76523c0fb398df5348a94763ded4ab4fe0587572aa17e1a2\": rpc error: code = NotFound desc = could not find container \"353b9fade6ecdf5e76523c0fb398df5348a94763ded4ab4fe0587572aa17e1a2\": container with ID starting with 353b9fade6ecdf5e76523c0fb398df5348a94763ded4ab4fe0587572aa17e1a2 not found: ID does not exist" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.081615 4766 scope.go:117] 
"RemoveContainer" containerID="d0acd5f1fbd5f9a5efec13f7d0171b4a80177f7d4ce942590c1d0cd644ea98c7" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.082219 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0acd5f1fbd5f9a5efec13f7d0171b4a80177f7d4ce942590c1d0cd644ea98c7"} err="failed to get container status \"d0acd5f1fbd5f9a5efec13f7d0171b4a80177f7d4ce942590c1d0cd644ea98c7\": rpc error: code = NotFound desc = could not find container \"d0acd5f1fbd5f9a5efec13f7d0171b4a80177f7d4ce942590c1d0cd644ea98c7\": container with ID starting with d0acd5f1fbd5f9a5efec13f7d0171b4a80177f7d4ce942590c1d0cd644ea98c7 not found: ID does not exist" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.082265 4766 scope.go:117] "RemoveContainer" containerID="518ba0c3fb255242564bc1f7624df98e5fddad11773d48aec17ba9dafcf48845" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.083009 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"518ba0c3fb255242564bc1f7624df98e5fddad11773d48aec17ba9dafcf48845"} err="failed to get container status \"518ba0c3fb255242564bc1f7624df98e5fddad11773d48aec17ba9dafcf48845\": rpc error: code = NotFound desc = could not find container \"518ba0c3fb255242564bc1f7624df98e5fddad11773d48aec17ba9dafcf48845\": container with ID starting with 518ba0c3fb255242564bc1f7624df98e5fddad11773d48aec17ba9dafcf48845 not found: ID does not exist" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.083043 4766 scope.go:117] "RemoveContainer" containerID="11e5334420c2a1b9248b9ef54849df0f838d44c864189c794f84805e14898f77" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.083305 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11e5334420c2a1b9248b9ef54849df0f838d44c864189c794f84805e14898f77"} err="failed to get container status \"11e5334420c2a1b9248b9ef54849df0f838d44c864189c794f84805e14898f77\": rpc error: code = NotFound desc = could not find container \"11e5334420c2a1b9248b9ef54849df0f838d44c864189c794f84805e14898f77\": container with ID starting with 11e5334420c2a1b9248b9ef54849df0f838d44c864189c794f84805e14898f77 not found: ID does not exist" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.083324 4766 scope.go:117] "RemoveContainer" containerID="353b9fade6ecdf5e76523c0fb398df5348a94763ded4ab4fe0587572aa17e1a2" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.083575 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"353b9fade6ecdf5e76523c0fb398df5348a94763ded4ab4fe0587572aa17e1a2"} err="failed to get container status \"353b9fade6ecdf5e76523c0fb398df5348a94763ded4ab4fe0587572aa17e1a2\": rpc error: code = NotFound desc = could not find container \"353b9fade6ecdf5e76523c0fb398df5348a94763ded4ab4fe0587572aa17e1a2\": container with ID starting with 353b9fade6ecdf5e76523c0fb398df5348a94763ded4ab4fe0587572aa17e1a2 not found: ID does not exist" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.083598 4766 scope.go:117] "RemoveContainer" containerID="d0acd5f1fbd5f9a5efec13f7d0171b4a80177f7d4ce942590c1d0cd644ea98c7" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.083903 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0acd5f1fbd5f9a5efec13f7d0171b4a80177f7d4ce942590c1d0cd644ea98c7"} err="failed to get container status \"d0acd5f1fbd5f9a5efec13f7d0171b4a80177f7d4ce942590c1d0cd644ea98c7\": rpc error: code = NotFound desc 
= could not find container \"d0acd5f1fbd5f9a5efec13f7d0171b4a80177f7d4ce942590c1d0cd644ea98c7\": container with ID starting with d0acd5f1fbd5f9a5efec13f7d0171b4a80177f7d4ce942590c1d0cd644ea98c7 not found: ID does not exist" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.083946 4766 scope.go:117] "RemoveContainer" containerID="518ba0c3fb255242564bc1f7624df98e5fddad11773d48aec17ba9dafcf48845" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.085066 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"518ba0c3fb255242564bc1f7624df98e5fddad11773d48aec17ba9dafcf48845"} err="failed to get container status \"518ba0c3fb255242564bc1f7624df98e5fddad11773d48aec17ba9dafcf48845\": rpc error: code = NotFound desc = could not find container \"518ba0c3fb255242564bc1f7624df98e5fddad11773d48aec17ba9dafcf48845\": container with ID starting with 518ba0c3fb255242564bc1f7624df98e5fddad11773d48aec17ba9dafcf48845 not found: ID does not exist" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.099634 4766 scope.go:117] "RemoveContainer" containerID="11e5334420c2a1b9248b9ef54849df0f838d44c864189c794f84805e14898f77" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.100797 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11e5334420c2a1b9248b9ef54849df0f838d44c864189c794f84805e14898f77"} err="failed to get container status \"11e5334420c2a1b9248b9ef54849df0f838d44c864189c794f84805e14898f77\": rpc error: code = NotFound desc = could not find container \"11e5334420c2a1b9248b9ef54849df0f838d44c864189c794f84805e14898f77\": container with ID starting with 11e5334420c2a1b9248b9ef54849df0f838d44c864189c794f84805e14898f77 not found: ID does not exist" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.100991 4766 scope.go:117] "RemoveContainer" containerID="353b9fade6ecdf5e76523c0fb398df5348a94763ded4ab4fe0587572aa17e1a2" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.111554 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"353b9fade6ecdf5e76523c0fb398df5348a94763ded4ab4fe0587572aa17e1a2"} err="failed to get container status \"353b9fade6ecdf5e76523c0fb398df5348a94763ded4ab4fe0587572aa17e1a2\": rpc error: code = NotFound desc = could not find container \"353b9fade6ecdf5e76523c0fb398df5348a94763ded4ab4fe0587572aa17e1a2\": container with ID starting with 353b9fade6ecdf5e76523c0fb398df5348a94763ded4ab4fe0587572aa17e1a2 not found: ID does not exist" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.111796 4766 scope.go:117] "RemoveContainer" containerID="d0acd5f1fbd5f9a5efec13f7d0171b4a80177f7d4ce942590c1d0cd644ea98c7" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.112342 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0acd5f1fbd5f9a5efec13f7d0171b4a80177f7d4ce942590c1d0cd644ea98c7"} err="failed to get container status \"d0acd5f1fbd5f9a5efec13f7d0171b4a80177f7d4ce942590c1d0cd644ea98c7\": rpc error: code = NotFound desc = could not find container \"d0acd5f1fbd5f9a5efec13f7d0171b4a80177f7d4ce942590c1d0cd644ea98c7\": container with ID starting with d0acd5f1fbd5f9a5efec13f7d0171b4a80177f7d4ce942590c1d0cd644ea98c7 not found: ID does not exist" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.112388 4766 scope.go:117] "RemoveContainer" containerID="518ba0c3fb255242564bc1f7624df98e5fddad11773d48aec17ba9dafcf48845" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.112938 
4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"518ba0c3fb255242564bc1f7624df98e5fddad11773d48aec17ba9dafcf48845"} err="failed to get container status \"518ba0c3fb255242564bc1f7624df98e5fddad11773d48aec17ba9dafcf48845\": rpc error: code = NotFound desc = could not find container \"518ba0c3fb255242564bc1f7624df98e5fddad11773d48aec17ba9dafcf48845\": container with ID starting with 518ba0c3fb255242564bc1f7624df98e5fddad11773d48aec17ba9dafcf48845 not found: ID does not exist" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.167941 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\") " pod="openstack/ceilometer-0" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.167984 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-log-httpd\") pod \"ceilometer-0\" (UID: \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\") " pod="openstack/ceilometer-0" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.168000 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\") " pod="openstack/ceilometer-0" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.168037 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-config-data\") pod \"ceilometer-0\" (UID: \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\") " pod="openstack/ceilometer-0" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.168058 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-scripts\") pod \"ceilometer-0\" (UID: \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\") " pod="openstack/ceilometer-0" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.168155 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-run-httpd\") pod \"ceilometer-0\" (UID: \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\") " pod="openstack/ceilometer-0" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.168178 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrj44\" (UniqueName: \"kubernetes.io/projected/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-kube-api-access-rrj44\") pod \"ceilometer-0\" (UID: \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\") " pod="openstack/ceilometer-0" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.269933 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-config-data\") pod \"ceilometer-0\" (UID: \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\") " pod="openstack/ceilometer-0" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.270307 4766 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-scripts\") pod \"ceilometer-0\" (UID: \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\") " pod="openstack/ceilometer-0" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.270551 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-run-httpd\") pod \"ceilometer-0\" (UID: \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\") " pod="openstack/ceilometer-0" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.271064 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrj44\" (UniqueName: \"kubernetes.io/projected/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-kube-api-access-rrj44\") pod \"ceilometer-0\" (UID: \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\") " pod="openstack/ceilometer-0" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.271533 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\") " pod="openstack/ceilometer-0" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.272277 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-log-httpd\") pod \"ceilometer-0\" (UID: \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\") " pod="openstack/ceilometer-0" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.272700 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\") " pod="openstack/ceilometer-0" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.271005 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-run-httpd\") pod \"ceilometer-0\" (UID: \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\") " pod="openstack/ceilometer-0" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.272622 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-log-httpd\") pod \"ceilometer-0\" (UID: \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\") " pod="openstack/ceilometer-0" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.275431 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\") " pod="openstack/ceilometer-0" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.277580 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-config-data\") pod \"ceilometer-0\" (UID: \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\") " pod="openstack/ceilometer-0" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.280372 4766 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\") " pod="openstack/ceilometer-0" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.281022 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-scripts\") pod \"ceilometer-0\" (UID: \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\") " pod="openstack/ceilometer-0" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.291856 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrj44\" (UniqueName: \"kubernetes.io/projected/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-kube-api-access-rrj44\") pod \"ceilometer-0\" (UID: \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\") " pod="openstack/ceilometer-0" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.369580 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.866830 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:47:44 crc kubenswrapper[4766]: W0129 11:47:44.873093 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod036b4b3e_8fcf_4edd_84d4_f494e3576a2c.slice/crio-cecb650c74452143bc620e38bf6467094636e8cb1f88bd10fe321dae7ca0e98a WatchSource:0}: Error finding container cecb650c74452143bc620e38bf6467094636e8cb1f88bd10fe321dae7ca0e98a: Status 404 returned error can't find the container with id cecb650c74452143bc620e38bf6467094636e8cb1f88bd10fe321dae7ca0e98a Jan 29 11:47:44 crc kubenswrapper[4766]: I0129 11:47:44.973066 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:47:45 crc kubenswrapper[4766]: I0129 11:47:45.236602 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c9155e4-f0e0-4edb-a814-7db1466002e7" path="/var/lib/kubelet/pods/8c9155e4-f0e0-4edb-a814-7db1466002e7/volumes" Jan 29 11:47:45 crc kubenswrapper[4766]: I0129 11:47:45.381335 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"036b4b3e-8fcf-4edd-84d4-f494e3576a2c","Type":"ContainerStarted","Data":"cecb650c74452143bc620e38bf6467094636e8cb1f88bd10fe321dae7ca0e98a"} Jan 29 11:47:45 crc kubenswrapper[4766]: I0129 11:47:45.381402 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:47:45 crc kubenswrapper[4766]: I0129 11:47:45.381685 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:47:45 crc kubenswrapper[4766]: I0129 11:47:45.829406 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 11:47:45 crc kubenswrapper[4766]: I0129 11:47:45.833534 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 11:47:46 crc kubenswrapper[4766]: I0129 11:47:46.339010 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 11:47:46 crc kubenswrapper[4766]: I0129 11:47:46.339452 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:47:46 crc kubenswrapper[4766]: I0129 11:47:46.362406 4766 patch_prober.go:28] 
interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:47:46 crc kubenswrapper[4766]: I0129 11:47:46.362467 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:47:46 crc kubenswrapper[4766]: I0129 11:47:46.390339 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"036b4b3e-8fcf-4edd-84d4-f494e3576a2c","Type":"ContainerStarted","Data":"898af7966c310a87dc7860b63fae1d4d9b40aec63f6d38b28110f06721064a4f"} Jan 29 11:47:46 crc kubenswrapper[4766]: I0129 11:47:46.438790 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 11:47:47 crc kubenswrapper[4766]: I0129 11:47:47.404671 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"036b4b3e-8fcf-4edd-84d4-f494e3576a2c","Type":"ContainerStarted","Data":"df868cf5d5758c0f9dc91bebaf01897e91120f29eef576f12597198534d7abda"} Jan 29 11:47:53 crc kubenswrapper[4766]: I0129 11:47:53.465314 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cv24v" event={"ID":"8ecbeee8-c61d-4b75-bc60-021d3739e386","Type":"ContainerStarted","Data":"e9d3c086db8be6c2238dd8bc1ca1ec8cf931703d74662e7deb56e597e993e11f"} Jan 29 11:47:53 crc kubenswrapper[4766]: I0129 11:47:53.467742 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"036b4b3e-8fcf-4edd-84d4-f494e3576a2c","Type":"ContainerStarted","Data":"c571aad652db5a3373bbf1e8982d07a8d343b650baee0fb854cfd3705a0bd994"} Jan 29 11:47:53 crc kubenswrapper[4766]: I0129 11:47:53.485661 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-cv24v" podStartSLOduration=2.459885781 podStartE2EDuration="13.485634517s" podCreationTimestamp="2026-01-29 11:47:40 +0000 UTC" firstStartedPulling="2026-01-29 11:47:41.80391815 +0000 UTC m=+1598.916311161" lastFinishedPulling="2026-01-29 11:47:52.829666886 +0000 UTC m=+1609.942059897" observedRunningTime="2026-01-29 11:47:53.478963472 +0000 UTC m=+1610.591356483" watchObservedRunningTime="2026-01-29 11:47:53.485634517 +0000 UTC m=+1610.598027528" Jan 29 11:47:55 crc kubenswrapper[4766]: I0129 11:47:55.485672 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"036b4b3e-8fcf-4edd-84d4-f494e3576a2c","Type":"ContainerStarted","Data":"8860a7931ab249ad171b966fd6a459b5bf4a3c55a00c7a0fce68d4407a33dccb"} Jan 29 11:47:55 crc kubenswrapper[4766]: I0129 11:47:55.486169 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 11:47:55 crc kubenswrapper[4766]: I0129 11:47:55.485917 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="036b4b3e-8fcf-4edd-84d4-f494e3576a2c" containerName="sg-core" containerID="cri-o://c571aad652db5a3373bbf1e8982d07a8d343b650baee0fb854cfd3705a0bd994" gracePeriod=30 Jan 29 11:47:55 crc kubenswrapper[4766]: I0129 11:47:55.485916 
4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="036b4b3e-8fcf-4edd-84d4-f494e3576a2c" containerName="proxy-httpd" containerID="cri-o://8860a7931ab249ad171b966fd6a459b5bf4a3c55a00c7a0fce68d4407a33dccb" gracePeriod=30 Jan 29 11:47:55 crc kubenswrapper[4766]: I0129 11:47:55.485947 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="036b4b3e-8fcf-4edd-84d4-f494e3576a2c" containerName="ceilometer-notification-agent" containerID="cri-o://df868cf5d5758c0f9dc91bebaf01897e91120f29eef576f12597198534d7abda" gracePeriod=30 Jan 29 11:47:55 crc kubenswrapper[4766]: I0129 11:47:55.485918 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="036b4b3e-8fcf-4edd-84d4-f494e3576a2c" containerName="ceilometer-central-agent" containerID="cri-o://898af7966c310a87dc7860b63fae1d4d9b40aec63f6d38b28110f06721064a4f" gracePeriod=30 Jan 29 11:47:55 crc kubenswrapper[4766]: I0129 11:47:55.523712 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.520870386 podStartE2EDuration="12.523691691s" podCreationTimestamp="2026-01-29 11:47:43 +0000 UTC" firstStartedPulling="2026-01-29 11:47:44.8763257 +0000 UTC m=+1601.988718711" lastFinishedPulling="2026-01-29 11:47:54.879147005 +0000 UTC m=+1611.991540016" observedRunningTime="2026-01-29 11:47:55.514194927 +0000 UTC m=+1612.626587938" watchObservedRunningTime="2026-01-29 11:47:55.523691691 +0000 UTC m=+1612.636084702" Jan 29 11:47:56 crc kubenswrapper[4766]: I0129 11:47:56.496295 4766 generic.go:334] "Generic (PLEG): container finished" podID="036b4b3e-8fcf-4edd-84d4-f494e3576a2c" containerID="8860a7931ab249ad171b966fd6a459b5bf4a3c55a00c7a0fce68d4407a33dccb" exitCode=0 Jan 29 11:47:56 crc kubenswrapper[4766]: I0129 11:47:56.497295 4766 generic.go:334] "Generic (PLEG): container finished" podID="036b4b3e-8fcf-4edd-84d4-f494e3576a2c" containerID="c571aad652db5a3373bbf1e8982d07a8d343b650baee0fb854cfd3705a0bd994" exitCode=2 Jan 29 11:47:56 crc kubenswrapper[4766]: I0129 11:47:56.497372 4766 generic.go:334] "Generic (PLEG): container finished" podID="036b4b3e-8fcf-4edd-84d4-f494e3576a2c" containerID="df868cf5d5758c0f9dc91bebaf01897e91120f29eef576f12597198534d7abda" exitCode=0 Jan 29 11:47:56 crc kubenswrapper[4766]: I0129 11:47:56.496456 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"036b4b3e-8fcf-4edd-84d4-f494e3576a2c","Type":"ContainerDied","Data":"8860a7931ab249ad171b966fd6a459b5bf4a3c55a00c7a0fce68d4407a33dccb"} Jan 29 11:47:56 crc kubenswrapper[4766]: I0129 11:47:56.497606 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"036b4b3e-8fcf-4edd-84d4-f494e3576a2c","Type":"ContainerDied","Data":"c571aad652db5a3373bbf1e8982d07a8d343b650baee0fb854cfd3705a0bd994"} Jan 29 11:47:56 crc kubenswrapper[4766]: I0129 11:47:56.497679 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"036b4b3e-8fcf-4edd-84d4-f494e3576a2c","Type":"ContainerDied","Data":"df868cf5d5758c0f9dc91bebaf01897e91120f29eef576f12597198534d7abda"} Jan 29 11:47:57 crc kubenswrapper[4766]: I0129 11:47:57.515368 4766 generic.go:334] "Generic (PLEG): container finished" podID="036b4b3e-8fcf-4edd-84d4-f494e3576a2c" containerID="898af7966c310a87dc7860b63fae1d4d9b40aec63f6d38b28110f06721064a4f" exitCode=0 Jan 29 11:47:57 crc 
kubenswrapper[4766]: I0129 11:47:57.516251 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"036b4b3e-8fcf-4edd-84d4-f494e3576a2c","Type":"ContainerDied","Data":"898af7966c310a87dc7860b63fae1d4d9b40aec63f6d38b28110f06721064a4f"} Jan 29 11:47:57 crc kubenswrapper[4766]: I0129 11:47:57.774078 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:47:57 crc kubenswrapper[4766]: I0129 11:47:57.824368 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrj44\" (UniqueName: \"kubernetes.io/projected/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-kube-api-access-rrj44\") pod \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\" (UID: \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\") " Jan 29 11:47:57 crc kubenswrapper[4766]: I0129 11:47:57.824487 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-log-httpd\") pod \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\" (UID: \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\") " Jan 29 11:47:57 crc kubenswrapper[4766]: I0129 11:47:57.824537 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-scripts\") pod \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\" (UID: \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\") " Jan 29 11:47:57 crc kubenswrapper[4766]: I0129 11:47:57.824571 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-config-data\") pod \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\" (UID: \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\") " Jan 29 11:47:57 crc kubenswrapper[4766]: I0129 11:47:57.824620 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-combined-ca-bundle\") pod \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\" (UID: \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\") " Jan 29 11:47:57 crc kubenswrapper[4766]: I0129 11:47:57.824686 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-run-httpd\") pod \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\" (UID: \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\") " Jan 29 11:47:57 crc kubenswrapper[4766]: I0129 11:47:57.824757 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-sg-core-conf-yaml\") pod \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\" (UID: \"036b4b3e-8fcf-4edd-84d4-f494e3576a2c\") " Jan 29 11:47:57 crc kubenswrapper[4766]: I0129 11:47:57.827206 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "036b4b3e-8fcf-4edd-84d4-f494e3576a2c" (UID: "036b4b3e-8fcf-4edd-84d4-f494e3576a2c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:47:57 crc kubenswrapper[4766]: I0129 11:47:57.831176 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-scripts" (OuterVolumeSpecName: "scripts") pod "036b4b3e-8fcf-4edd-84d4-f494e3576a2c" (UID: "036b4b3e-8fcf-4edd-84d4-f494e3576a2c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:57 crc kubenswrapper[4766]: I0129 11:47:57.827298 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "036b4b3e-8fcf-4edd-84d4-f494e3576a2c" (UID: "036b4b3e-8fcf-4edd-84d4-f494e3576a2c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:47:57 crc kubenswrapper[4766]: I0129 11:47:57.860733 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "036b4b3e-8fcf-4edd-84d4-f494e3576a2c" (UID: "036b4b3e-8fcf-4edd-84d4-f494e3576a2c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:57 crc kubenswrapper[4766]: I0129 11:47:57.861404 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-kube-api-access-rrj44" (OuterVolumeSpecName: "kube-api-access-rrj44") pod "036b4b3e-8fcf-4edd-84d4-f494e3576a2c" (UID: "036b4b3e-8fcf-4edd-84d4-f494e3576a2c"). InnerVolumeSpecName "kube-api-access-rrj44". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:47:57 crc kubenswrapper[4766]: I0129 11:47:57.926979 4766 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:57 crc kubenswrapper[4766]: I0129 11:47:57.927040 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrj44\" (UniqueName: \"kubernetes.io/projected/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-kube-api-access-rrj44\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:57 crc kubenswrapper[4766]: I0129 11:47:57.927058 4766 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:57 crc kubenswrapper[4766]: I0129 11:47:57.927070 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:57 crc kubenswrapper[4766]: I0129 11:47:57.927082 4766 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:57 crc kubenswrapper[4766]: I0129 11:47:57.942727 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "036b4b3e-8fcf-4edd-84d4-f494e3576a2c" (UID: "036b4b3e-8fcf-4edd-84d4-f494e3576a2c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:57 crc kubenswrapper[4766]: I0129 11:47:57.958680 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-config-data" (OuterVolumeSpecName: "config-data") pod "036b4b3e-8fcf-4edd-84d4-f494e3576a2c" (UID: "036b4b3e-8fcf-4edd-84d4-f494e3576a2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.028425 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.028466 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/036b4b3e-8fcf-4edd-84d4-f494e3576a2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.527387 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"036b4b3e-8fcf-4edd-84d4-f494e3576a2c","Type":"ContainerDied","Data":"cecb650c74452143bc620e38bf6467094636e8cb1f88bd10fe321dae7ca0e98a"} Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.527472 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.527698 4766 scope.go:117] "RemoveContainer" containerID="8860a7931ab249ad171b966fd6a459b5bf4a3c55a00c7a0fce68d4407a33dccb" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.564533 4766 scope.go:117] "RemoveContainer" containerID="c571aad652db5a3373bbf1e8982d07a8d343b650baee0fb854cfd3705a0bd994" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.567295 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.580035 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.596496 4766 scope.go:117] "RemoveContainer" containerID="df868cf5d5758c0f9dc91bebaf01897e91120f29eef576f12597198534d7abda" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.600993 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:47:58 crc kubenswrapper[4766]: E0129 11:47:58.601477 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="036b4b3e-8fcf-4edd-84d4-f494e3576a2c" containerName="proxy-httpd" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.601494 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="036b4b3e-8fcf-4edd-84d4-f494e3576a2c" containerName="proxy-httpd" Jan 29 11:47:58 crc kubenswrapper[4766]: E0129 11:47:58.601508 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="036b4b3e-8fcf-4edd-84d4-f494e3576a2c" containerName="ceilometer-notification-agent" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.601516 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="036b4b3e-8fcf-4edd-84d4-f494e3576a2c" containerName="ceilometer-notification-agent" Jan 29 11:47:58 crc kubenswrapper[4766]: E0129 11:47:58.601548 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="036b4b3e-8fcf-4edd-84d4-f494e3576a2c" containerName="ceilometer-central-agent" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.601555 
4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="036b4b3e-8fcf-4edd-84d4-f494e3576a2c" containerName="ceilometer-central-agent" Jan 29 11:47:58 crc kubenswrapper[4766]: E0129 11:47:58.601568 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="036b4b3e-8fcf-4edd-84d4-f494e3576a2c" containerName="sg-core" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.601575 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="036b4b3e-8fcf-4edd-84d4-f494e3576a2c" containerName="sg-core" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.601805 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="036b4b3e-8fcf-4edd-84d4-f494e3576a2c" containerName="ceilometer-notification-agent" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.601820 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="036b4b3e-8fcf-4edd-84d4-f494e3576a2c" containerName="sg-core" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.601830 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="036b4b3e-8fcf-4edd-84d4-f494e3576a2c" containerName="proxy-httpd" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.601840 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="036b4b3e-8fcf-4edd-84d4-f494e3576a2c" containerName="ceilometer-central-agent" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.608805 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.614421 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.616377 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.618351 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.637145 4766 scope.go:117] "RemoveContainer" containerID="898af7966c310a87dc7860b63fae1d4d9b40aec63f6d38b28110f06721064a4f" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.638910 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918e093d-d6df-472e-b2cc-d1951d07122e-scripts\") pod \"ceilometer-0\" (UID: \"918e093d-d6df-472e-b2cc-d1951d07122e\") " pod="openstack/ceilometer-0" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.639240 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/918e093d-d6df-472e-b2cc-d1951d07122e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"918e093d-d6df-472e-b2cc-d1951d07122e\") " pod="openstack/ceilometer-0" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.639293 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918e093d-d6df-472e-b2cc-d1951d07122e-config-data\") pod \"ceilometer-0\" (UID: \"918e093d-d6df-472e-b2cc-d1951d07122e\") " pod="openstack/ceilometer-0" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.639323 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmwvf\" (UniqueName: 
\"kubernetes.io/projected/918e093d-d6df-472e-b2cc-d1951d07122e-kube-api-access-vmwvf\") pod \"ceilometer-0\" (UID: \"918e093d-d6df-472e-b2cc-d1951d07122e\") " pod="openstack/ceilometer-0" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.639367 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918e093d-d6df-472e-b2cc-d1951d07122e-run-httpd\") pod \"ceilometer-0\" (UID: \"918e093d-d6df-472e-b2cc-d1951d07122e\") " pod="openstack/ceilometer-0" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.639657 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918e093d-d6df-472e-b2cc-d1951d07122e-log-httpd\") pod \"ceilometer-0\" (UID: \"918e093d-d6df-472e-b2cc-d1951d07122e\") " pod="openstack/ceilometer-0" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.639685 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918e093d-d6df-472e-b2cc-d1951d07122e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"918e093d-d6df-472e-b2cc-d1951d07122e\") " pod="openstack/ceilometer-0" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.741877 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/918e093d-d6df-472e-b2cc-d1951d07122e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"918e093d-d6df-472e-b2cc-d1951d07122e\") " pod="openstack/ceilometer-0" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.741939 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918e093d-d6df-472e-b2cc-d1951d07122e-config-data\") pod \"ceilometer-0\" (UID: \"918e093d-d6df-472e-b2cc-d1951d07122e\") " pod="openstack/ceilometer-0" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.741974 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmwvf\" (UniqueName: \"kubernetes.io/projected/918e093d-d6df-472e-b2cc-d1951d07122e-kube-api-access-vmwvf\") pod \"ceilometer-0\" (UID: \"918e093d-d6df-472e-b2cc-d1951d07122e\") " pod="openstack/ceilometer-0" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.742007 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918e093d-d6df-472e-b2cc-d1951d07122e-run-httpd\") pod \"ceilometer-0\" (UID: \"918e093d-d6df-472e-b2cc-d1951d07122e\") " pod="openstack/ceilometer-0" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.742123 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918e093d-d6df-472e-b2cc-d1951d07122e-log-httpd\") pod \"ceilometer-0\" (UID: \"918e093d-d6df-472e-b2cc-d1951d07122e\") " pod="openstack/ceilometer-0" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.742147 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918e093d-d6df-472e-b2cc-d1951d07122e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"918e093d-d6df-472e-b2cc-d1951d07122e\") " pod="openstack/ceilometer-0" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.742174 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918e093d-d6df-472e-b2cc-d1951d07122e-scripts\") pod \"ceilometer-0\" (UID: \"918e093d-d6df-472e-b2cc-d1951d07122e\") " pod="openstack/ceilometer-0" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.743749 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918e093d-d6df-472e-b2cc-d1951d07122e-run-httpd\") pod \"ceilometer-0\" (UID: \"918e093d-d6df-472e-b2cc-d1951d07122e\") " pod="openstack/ceilometer-0" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.743800 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918e093d-d6df-472e-b2cc-d1951d07122e-log-httpd\") pod \"ceilometer-0\" (UID: \"918e093d-d6df-472e-b2cc-d1951d07122e\") " pod="openstack/ceilometer-0" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.747578 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/918e093d-d6df-472e-b2cc-d1951d07122e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"918e093d-d6df-472e-b2cc-d1951d07122e\") " pod="openstack/ceilometer-0" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.747823 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918e093d-d6df-472e-b2cc-d1951d07122e-scripts\") pod \"ceilometer-0\" (UID: \"918e093d-d6df-472e-b2cc-d1951d07122e\") " pod="openstack/ceilometer-0" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.748405 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918e093d-d6df-472e-b2cc-d1951d07122e-config-data\") pod \"ceilometer-0\" (UID: \"918e093d-d6df-472e-b2cc-d1951d07122e\") " pod="openstack/ceilometer-0" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.759930 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918e093d-d6df-472e-b2cc-d1951d07122e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"918e093d-d6df-472e-b2cc-d1951d07122e\") " pod="openstack/ceilometer-0" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.762267 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmwvf\" (UniqueName: \"kubernetes.io/projected/918e093d-d6df-472e-b2cc-d1951d07122e-kube-api-access-vmwvf\") pod \"ceilometer-0\" (UID: \"918e093d-d6df-472e-b2cc-d1951d07122e\") " pod="openstack/ceilometer-0" Jan 29 11:47:58 crc kubenswrapper[4766]: I0129 11:47:58.928603 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:47:59 crc kubenswrapper[4766]: I0129 11:47:59.237505 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="036b4b3e-8fcf-4edd-84d4-f494e3576a2c" path="/var/lib/kubelet/pods/036b4b3e-8fcf-4edd-84d4-f494e3576a2c/volumes" Jan 29 11:47:59 crc kubenswrapper[4766]: I0129 11:47:59.381826 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:47:59 crc kubenswrapper[4766]: W0129 11:47:59.383980 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod918e093d_d6df_472e_b2cc_d1951d07122e.slice/crio-99ff2e254886aea61752a26baf3b68a04e04e4cbf18fc31022e813f749dd1523 WatchSource:0}: Error finding container 99ff2e254886aea61752a26baf3b68a04e04e4cbf18fc31022e813f749dd1523: Status 404 returned error can't find the container with id 99ff2e254886aea61752a26baf3b68a04e04e4cbf18fc31022e813f749dd1523 Jan 29 11:47:59 crc kubenswrapper[4766]: I0129 11:47:59.538571 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918e093d-d6df-472e-b2cc-d1951d07122e","Type":"ContainerStarted","Data":"99ff2e254886aea61752a26baf3b68a04e04e4cbf18fc31022e813f749dd1523"} Jan 29 11:48:00 crc kubenswrapper[4766]: I0129 11:48:00.550875 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918e093d-d6df-472e-b2cc-d1951d07122e","Type":"ContainerStarted","Data":"491258a763f89d6a553d0ca24d5585fd15b3f994ad574051847cc3f37fe1795d"} Jan 29 11:48:01 crc kubenswrapper[4766]: I0129 11:48:01.572748 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918e093d-d6df-472e-b2cc-d1951d07122e","Type":"ContainerStarted","Data":"0bd3701b0fc04f2f6af12a529b0383e7c6ba65ac58885a960d0522e02123d03d"} Jan 29 11:48:02 crc kubenswrapper[4766]: I0129 11:48:02.585163 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918e093d-d6df-472e-b2cc-d1951d07122e","Type":"ContainerStarted","Data":"28d1a0f6b685ec0c1db62c04ee015ff07f3bba4d9a5d9b4f2fa4ebded079e855"} Jan 29 11:48:03 crc kubenswrapper[4766]: I0129 11:48:03.596847 4766 generic.go:334] "Generic (PLEG): container finished" podID="8ecbeee8-c61d-4b75-bc60-021d3739e386" containerID="e9d3c086db8be6c2238dd8bc1ca1ec8cf931703d74662e7deb56e597e993e11f" exitCode=0 Jan 29 11:48:03 crc kubenswrapper[4766]: I0129 11:48:03.596990 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cv24v" event={"ID":"8ecbeee8-c61d-4b75-bc60-021d3739e386","Type":"ContainerDied","Data":"e9d3c086db8be6c2238dd8bc1ca1ec8cf931703d74662e7deb56e597e993e11f"} Jan 29 11:48:04 crc kubenswrapper[4766]: I0129 11:48:04.609465 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918e093d-d6df-472e-b2cc-d1951d07122e","Type":"ContainerStarted","Data":"c7d063e77079dc0f9b0ecc7b6f91714548ca83ac397bb13c73de527c269488c6"} Jan 29 11:48:04 crc kubenswrapper[4766]: I0129 11:48:04.609832 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 11:48:04 crc kubenswrapper[4766]: I0129 11:48:04.644266 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.300627959 podStartE2EDuration="6.644248856s" podCreationTimestamp="2026-01-29 11:47:58 +0000 UTC" firstStartedPulling="2026-01-29 
11:47:59.388715975 +0000 UTC m=+1616.501108996" lastFinishedPulling="2026-01-29 11:48:03.732336882 +0000 UTC m=+1620.844729893" observedRunningTime="2026-01-29 11:48:04.630227387 +0000 UTC m=+1621.742620408" watchObservedRunningTime="2026-01-29 11:48:04.644248856 +0000 UTC m=+1621.756641867" Jan 29 11:48:04 crc kubenswrapper[4766]: I0129 11:48:04.950892 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cv24v" Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.070817 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ecbeee8-c61d-4b75-bc60-021d3739e386-combined-ca-bundle\") pod \"8ecbeee8-c61d-4b75-bc60-021d3739e386\" (UID: \"8ecbeee8-c61d-4b75-bc60-021d3739e386\") " Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.070946 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ecbeee8-c61d-4b75-bc60-021d3739e386-config-data\") pod \"8ecbeee8-c61d-4b75-bc60-021d3739e386\" (UID: \"8ecbeee8-c61d-4b75-bc60-021d3739e386\") " Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.071063 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ecbeee8-c61d-4b75-bc60-021d3739e386-scripts\") pod \"8ecbeee8-c61d-4b75-bc60-021d3739e386\" (UID: \"8ecbeee8-c61d-4b75-bc60-021d3739e386\") " Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.071111 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxkm7\" (UniqueName: \"kubernetes.io/projected/8ecbeee8-c61d-4b75-bc60-021d3739e386-kube-api-access-xxkm7\") pod \"8ecbeee8-c61d-4b75-bc60-021d3739e386\" (UID: \"8ecbeee8-c61d-4b75-bc60-021d3739e386\") " Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.079763 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ecbeee8-c61d-4b75-bc60-021d3739e386-scripts" (OuterVolumeSpecName: "scripts") pod "8ecbeee8-c61d-4b75-bc60-021d3739e386" (UID: "8ecbeee8-c61d-4b75-bc60-021d3739e386"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.094176 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ecbeee8-c61d-4b75-bc60-021d3739e386-kube-api-access-xxkm7" (OuterVolumeSpecName: "kube-api-access-xxkm7") pod "8ecbeee8-c61d-4b75-bc60-021d3739e386" (UID: "8ecbeee8-c61d-4b75-bc60-021d3739e386"). InnerVolumeSpecName "kube-api-access-xxkm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.099023 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ecbeee8-c61d-4b75-bc60-021d3739e386-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ecbeee8-c61d-4b75-bc60-021d3739e386" (UID: "8ecbeee8-c61d-4b75-bc60-021d3739e386"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.100836 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ecbeee8-c61d-4b75-bc60-021d3739e386-config-data" (OuterVolumeSpecName: "config-data") pod "8ecbeee8-c61d-4b75-bc60-021d3739e386" (UID: "8ecbeee8-c61d-4b75-bc60-021d3739e386"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.173195 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxkm7\" (UniqueName: \"kubernetes.io/projected/8ecbeee8-c61d-4b75-bc60-021d3739e386-kube-api-access-xxkm7\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.173235 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ecbeee8-c61d-4b75-bc60-021d3739e386-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.173248 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ecbeee8-c61d-4b75-bc60-021d3739e386-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.173259 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ecbeee8-c61d-4b75-bc60-021d3739e386-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.622291 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cv24v" event={"ID":"8ecbeee8-c61d-4b75-bc60-021d3739e386","Type":"ContainerDied","Data":"56652e963ae7ac747ef63afabb215b08b272cb197950cbb0e12b43f6c25491c7"} Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.622618 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56652e963ae7ac747ef63afabb215b08b272cb197950cbb0e12b43f6c25491c7" Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.622316 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cv24v" Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.722883 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 11:48:05 crc kubenswrapper[4766]: E0129 11:48:05.723326 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ecbeee8-c61d-4b75-bc60-021d3739e386" containerName="nova-cell0-conductor-db-sync" Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.723353 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ecbeee8-c61d-4b75-bc60-021d3739e386" containerName="nova-cell0-conductor-db-sync" Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.723645 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ecbeee8-c61d-4b75-bc60-021d3739e386" containerName="nova-cell0-conductor-db-sync" Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.724368 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.726843 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-7mg9r" Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.726911 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.740727 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.783553 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0992680a-4d88-4760-bead-37ef181a5992-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"0992680a-4d88-4760-bead-37ef181a5992\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.783863 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlx8p\" (UniqueName: \"kubernetes.io/projected/0992680a-4d88-4760-bead-37ef181a5992-kube-api-access-tlx8p\") pod \"nova-cell0-conductor-0\" (UID: \"0992680a-4d88-4760-bead-37ef181a5992\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.783937 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0992680a-4d88-4760-bead-37ef181a5992-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"0992680a-4d88-4760-bead-37ef181a5992\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.885709 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0992680a-4d88-4760-bead-37ef181a5992-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"0992680a-4d88-4760-bead-37ef181a5992\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.885828 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0992680a-4d88-4760-bead-37ef181a5992-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"0992680a-4d88-4760-bead-37ef181a5992\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.885939 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlx8p\" (UniqueName: \"kubernetes.io/projected/0992680a-4d88-4760-bead-37ef181a5992-kube-api-access-tlx8p\") pod \"nova-cell0-conductor-0\" (UID: \"0992680a-4d88-4760-bead-37ef181a5992\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.891321 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0992680a-4d88-4760-bead-37ef181a5992-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"0992680a-4d88-4760-bead-37ef181a5992\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.897124 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0992680a-4d88-4760-bead-37ef181a5992-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"0992680a-4d88-4760-bead-37ef181a5992\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:48:05 crc kubenswrapper[4766]: I0129 11:48:05.904113 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlx8p\" (UniqueName: \"kubernetes.io/projected/0992680a-4d88-4760-bead-37ef181a5992-kube-api-access-tlx8p\") pod \"nova-cell0-conductor-0\" (UID: \"0992680a-4d88-4760-bead-37ef181a5992\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:48:06 crc kubenswrapper[4766]: I0129 11:48:06.046372 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 11:48:06 crc kubenswrapper[4766]: I0129 11:48:06.369627 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 11:48:06 crc kubenswrapper[4766]: I0129 11:48:06.633697 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0992680a-4d88-4760-bead-37ef181a5992","Type":"ContainerStarted","Data":"b6e8edb3d1e75ea089fc7059dc90a98e0da9cad459d89253efde24a2f6f43312"} Jan 29 11:48:07 crc kubenswrapper[4766]: I0129 11:48:07.642361 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0992680a-4d88-4760-bead-37ef181a5992","Type":"ContainerStarted","Data":"d77ebc57f29a1383fe048bc2ff14d8d149bf65cfe05eb720145e17367fd65600"} Jan 29 11:48:07 crc kubenswrapper[4766]: I0129 11:48:07.642665 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 29 11:48:07 crc kubenswrapper[4766]: I0129 11:48:07.666655 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.666634988 podStartE2EDuration="2.666634988s" podCreationTimestamp="2026-01-29 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:48:07.659390988 +0000 UTC m=+1624.771783999" watchObservedRunningTime="2026-01-29 11:48:07.666634988 +0000 UTC m=+1624.779028009" Jan 29 11:48:08 crc kubenswrapper[4766]: I0129 11:48:08.657554 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 11:48:09 crc kubenswrapper[4766]: I0129 11:48:09.668645 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="0992680a-4d88-4760-bead-37ef181a5992" containerName="nova-cell0-conductor-conductor" containerID="cri-o://d77ebc57f29a1383fe048bc2ff14d8d149bf65cfe05eb720145e17367fd65600" gracePeriod=30 Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.321158 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.321436 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="918e093d-d6df-472e-b2cc-d1951d07122e" containerName="ceilometer-central-agent" containerID="cri-o://491258a763f89d6a553d0ca24d5585fd15b3f994ad574051847cc3f37fe1795d" gracePeriod=30 Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.321536 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="918e093d-d6df-472e-b2cc-d1951d07122e" containerName="proxy-httpd" containerID="cri-o://c7d063e77079dc0f9b0ecc7b6f91714548ca83ac397bb13c73de527c269488c6" gracePeriod=30 Jan 29 11:48:10 
crc kubenswrapper[4766]: I0129 11:48:10.321510 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="918e093d-d6df-472e-b2cc-d1951d07122e" containerName="ceilometer-notification-agent" containerID="cri-o://0bd3701b0fc04f2f6af12a529b0383e7c6ba65ac58885a960d0522e02123d03d" gracePeriod=30 Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.321536 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="918e093d-d6df-472e-b2cc-d1951d07122e" containerName="sg-core" containerID="cri-o://28d1a0f6b685ec0c1db62c04ee015ff07f3bba4d9a5d9b4f2fa4ebded079e855" gracePeriod=30 Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.504107 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.585620 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0992680a-4d88-4760-bead-37ef181a5992-combined-ca-bundle\") pod \"0992680a-4d88-4760-bead-37ef181a5992\" (UID: \"0992680a-4d88-4760-bead-37ef181a5992\") " Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.586050 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0992680a-4d88-4760-bead-37ef181a5992-config-data\") pod \"0992680a-4d88-4760-bead-37ef181a5992\" (UID: \"0992680a-4d88-4760-bead-37ef181a5992\") " Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.586312 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlx8p\" (UniqueName: \"kubernetes.io/projected/0992680a-4d88-4760-bead-37ef181a5992-kube-api-access-tlx8p\") pod \"0992680a-4d88-4760-bead-37ef181a5992\" (UID: \"0992680a-4d88-4760-bead-37ef181a5992\") " Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.592875 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0992680a-4d88-4760-bead-37ef181a5992-kube-api-access-tlx8p" (OuterVolumeSpecName: "kube-api-access-tlx8p") pod "0992680a-4d88-4760-bead-37ef181a5992" (UID: "0992680a-4d88-4760-bead-37ef181a5992"). InnerVolumeSpecName "kube-api-access-tlx8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.618218 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0992680a-4d88-4760-bead-37ef181a5992-config-data" (OuterVolumeSpecName: "config-data") pod "0992680a-4d88-4760-bead-37ef181a5992" (UID: "0992680a-4d88-4760-bead-37ef181a5992"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.633569 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0992680a-4d88-4760-bead-37ef181a5992-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0992680a-4d88-4760-bead-37ef181a5992" (UID: "0992680a-4d88-4760-bead-37ef181a5992"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.683634 4766 generic.go:334] "Generic (PLEG): container finished" podID="0992680a-4d88-4760-bead-37ef181a5992" containerID="d77ebc57f29a1383fe048bc2ff14d8d149bf65cfe05eb720145e17367fd65600" exitCode=0 Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.683677 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.683729 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0992680a-4d88-4760-bead-37ef181a5992","Type":"ContainerDied","Data":"d77ebc57f29a1383fe048bc2ff14d8d149bf65cfe05eb720145e17367fd65600"} Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.683782 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0992680a-4d88-4760-bead-37ef181a5992","Type":"ContainerDied","Data":"b6e8edb3d1e75ea089fc7059dc90a98e0da9cad459d89253efde24a2f6f43312"} Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.683801 4766 scope.go:117] "RemoveContainer" containerID="d77ebc57f29a1383fe048bc2ff14d8d149bf65cfe05eb720145e17367fd65600" Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.688325 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlx8p\" (UniqueName: \"kubernetes.io/projected/0992680a-4d88-4760-bead-37ef181a5992-kube-api-access-tlx8p\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.688348 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0992680a-4d88-4760-bead-37ef181a5992-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.688358 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0992680a-4d88-4760-bead-37ef181a5992-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.689112 4766 generic.go:334] "Generic (PLEG): container finished" podID="918e093d-d6df-472e-b2cc-d1951d07122e" containerID="c7d063e77079dc0f9b0ecc7b6f91714548ca83ac397bb13c73de527c269488c6" exitCode=0 Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.689162 4766 generic.go:334] "Generic (PLEG): container finished" podID="918e093d-d6df-472e-b2cc-d1951d07122e" containerID="28d1a0f6b685ec0c1db62c04ee015ff07f3bba4d9a5d9b4f2fa4ebded079e855" exitCode=2 Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.689186 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918e093d-d6df-472e-b2cc-d1951d07122e","Type":"ContainerDied","Data":"c7d063e77079dc0f9b0ecc7b6f91714548ca83ac397bb13c73de527c269488c6"} Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.689216 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918e093d-d6df-472e-b2cc-d1951d07122e","Type":"ContainerDied","Data":"28d1a0f6b685ec0c1db62c04ee015ff07f3bba4d9a5d9b4f2fa4ebded079e855"} Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.738992 4766 scope.go:117] "RemoveContainer" containerID="d77ebc57f29a1383fe048bc2ff14d8d149bf65cfe05eb720145e17367fd65600" Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.739124 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 11:48:10 
crc kubenswrapper[4766]: E0129 11:48:10.741110 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d77ebc57f29a1383fe048bc2ff14d8d149bf65cfe05eb720145e17367fd65600\": container with ID starting with d77ebc57f29a1383fe048bc2ff14d8d149bf65cfe05eb720145e17367fd65600 not found: ID does not exist" containerID="d77ebc57f29a1383fe048bc2ff14d8d149bf65cfe05eb720145e17367fd65600" Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.741198 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d77ebc57f29a1383fe048bc2ff14d8d149bf65cfe05eb720145e17367fd65600"} err="failed to get container status \"d77ebc57f29a1383fe048bc2ff14d8d149bf65cfe05eb720145e17367fd65600\": rpc error: code = NotFound desc = could not find container \"d77ebc57f29a1383fe048bc2ff14d8d149bf65cfe05eb720145e17367fd65600\": container with ID starting with d77ebc57f29a1383fe048bc2ff14d8d149bf65cfe05eb720145e17367fd65600 not found: ID does not exist" Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.747929 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.774700 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 11:48:10 crc kubenswrapper[4766]: E0129 11:48:10.775267 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0992680a-4d88-4760-bead-37ef181a5992" containerName="nova-cell0-conductor-conductor" Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.775306 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0992680a-4d88-4760-bead-37ef181a5992" containerName="nova-cell0-conductor-conductor" Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.775572 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0992680a-4d88-4760-bead-37ef181a5992" containerName="nova-cell0-conductor-conductor" Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.776293 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.779087 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-7mg9r" Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.779438 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.784759 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.892589 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq5s7\" (UniqueName: \"kubernetes.io/projected/7245aebe-fe32-42fc-a489-c38b15bb4308-kube-api-access-qq5s7\") pod \"nova-cell0-conductor-0\" (UID: \"7245aebe-fe32-42fc-a489-c38b15bb4308\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.892660 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7245aebe-fe32-42fc-a489-c38b15bb4308-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7245aebe-fe32-42fc-a489-c38b15bb4308\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.892714 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7245aebe-fe32-42fc-a489-c38b15bb4308-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7245aebe-fe32-42fc-a489-c38b15bb4308\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.994192 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq5s7\" (UniqueName: \"kubernetes.io/projected/7245aebe-fe32-42fc-a489-c38b15bb4308-kube-api-access-qq5s7\") pod \"nova-cell0-conductor-0\" (UID: \"7245aebe-fe32-42fc-a489-c38b15bb4308\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.994497 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7245aebe-fe32-42fc-a489-c38b15bb4308-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7245aebe-fe32-42fc-a489-c38b15bb4308\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.994627 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7245aebe-fe32-42fc-a489-c38b15bb4308-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7245aebe-fe32-42fc-a489-c38b15bb4308\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.998207 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7245aebe-fe32-42fc-a489-c38b15bb4308-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7245aebe-fe32-42fc-a489-c38b15bb4308\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:48:10 crc kubenswrapper[4766]: I0129 11:48:10.998309 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7245aebe-fe32-42fc-a489-c38b15bb4308-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"7245aebe-fe32-42fc-a489-c38b15bb4308\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:48:11 crc kubenswrapper[4766]: I0129 11:48:11.008987 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq5s7\" (UniqueName: \"kubernetes.io/projected/7245aebe-fe32-42fc-a489-c38b15bb4308-kube-api-access-qq5s7\") pod \"nova-cell0-conductor-0\" (UID: \"7245aebe-fe32-42fc-a489-c38b15bb4308\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:48:11 crc kubenswrapper[4766]: I0129 11:48:11.092388 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 11:48:11 crc kubenswrapper[4766]: I0129 11:48:11.235071 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0992680a-4d88-4760-bead-37ef181a5992" path="/var/lib/kubelet/pods/0992680a-4d88-4760-bead-37ef181a5992/volumes" Jan 29 11:48:11 crc kubenswrapper[4766]: I0129 11:48:11.546933 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 11:48:11 crc kubenswrapper[4766]: I0129 11:48:11.700051 4766 generic.go:334] "Generic (PLEG): container finished" podID="918e093d-d6df-472e-b2cc-d1951d07122e" containerID="491258a763f89d6a553d0ca24d5585fd15b3f994ad574051847cc3f37fe1795d" exitCode=0 Jan 29 11:48:11 crc kubenswrapper[4766]: I0129 11:48:11.700108 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918e093d-d6df-472e-b2cc-d1951d07122e","Type":"ContainerDied","Data":"491258a763f89d6a553d0ca24d5585fd15b3f994ad574051847cc3f37fe1795d"} Jan 29 11:48:11 crc kubenswrapper[4766]: I0129 11:48:11.702113 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7245aebe-fe32-42fc-a489-c38b15bb4308","Type":"ContainerStarted","Data":"56dfdc813b8eb062b0e7e1f06ffe05c412ada4815e1d237ac127cb390912981c"} Jan 29 11:48:12 crc kubenswrapper[4766]: I0129 11:48:12.715505 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7245aebe-fe32-42fc-a489-c38b15bb4308","Type":"ContainerStarted","Data":"975d9dec64a2fca25f52a750db0c70feb57df9e6479ecb4133299bd8f6a0e06c"} Jan 29 11:48:12 crc kubenswrapper[4766]: I0129 11:48:12.716108 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 29 11:48:12 crc kubenswrapper[4766]: I0129 11:48:12.721801 4766 generic.go:334] "Generic (PLEG): container finished" podID="918e093d-d6df-472e-b2cc-d1951d07122e" containerID="0bd3701b0fc04f2f6af12a529b0383e7c6ba65ac58885a960d0522e02123d03d" exitCode=0 Jan 29 11:48:12 crc kubenswrapper[4766]: I0129 11:48:12.721849 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918e093d-d6df-472e-b2cc-d1951d07122e","Type":"ContainerDied","Data":"0bd3701b0fc04f2f6af12a529b0383e7c6ba65ac58885a960d0522e02123d03d"} Jan 29 11:48:12 crc kubenswrapper[4766]: I0129 11:48:12.741023 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.7410054390000003 podStartE2EDuration="2.741005439s" podCreationTimestamp="2026-01-29 11:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:48:12.732484825 +0000 UTC m=+1629.844877836" watchObservedRunningTime="2026-01-29 11:48:12.741005439 +0000 UTC m=+1629.853398450" Jan 29 11:48:12 crc 
kubenswrapper[4766]: I0129 11:48:12.910488 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:48:12 crc kubenswrapper[4766]: I0129 11:48:12.937695 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918e093d-d6df-472e-b2cc-d1951d07122e-config-data\") pod \"918e093d-d6df-472e-b2cc-d1951d07122e\" (UID: \"918e093d-d6df-472e-b2cc-d1951d07122e\") " Jan 29 11:48:12 crc kubenswrapper[4766]: I0129 11:48:12.937788 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmwvf\" (UniqueName: \"kubernetes.io/projected/918e093d-d6df-472e-b2cc-d1951d07122e-kube-api-access-vmwvf\") pod \"918e093d-d6df-472e-b2cc-d1951d07122e\" (UID: \"918e093d-d6df-472e-b2cc-d1951d07122e\") " Jan 29 11:48:12 crc kubenswrapper[4766]: I0129 11:48:12.937910 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918e093d-d6df-472e-b2cc-d1951d07122e-run-httpd\") pod \"918e093d-d6df-472e-b2cc-d1951d07122e\" (UID: \"918e093d-d6df-472e-b2cc-d1951d07122e\") " Jan 29 11:48:12 crc kubenswrapper[4766]: I0129 11:48:12.938753 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/918e093d-d6df-472e-b2cc-d1951d07122e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "918e093d-d6df-472e-b2cc-d1951d07122e" (UID: "918e093d-d6df-472e-b2cc-d1951d07122e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:48:12 crc kubenswrapper[4766]: I0129 11:48:12.938972 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918e093d-d6df-472e-b2cc-d1951d07122e-log-httpd\") pod \"918e093d-d6df-472e-b2cc-d1951d07122e\" (UID: \"918e093d-d6df-472e-b2cc-d1951d07122e\") " Jan 29 11:48:12 crc kubenswrapper[4766]: I0129 11:48:12.939529 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/918e093d-d6df-472e-b2cc-d1951d07122e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "918e093d-d6df-472e-b2cc-d1951d07122e" (UID: "918e093d-d6df-472e-b2cc-d1951d07122e"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:48:12 crc kubenswrapper[4766]: I0129 11:48:12.939619 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918e093d-d6df-472e-b2cc-d1951d07122e-combined-ca-bundle\") pod \"918e093d-d6df-472e-b2cc-d1951d07122e\" (UID: \"918e093d-d6df-472e-b2cc-d1951d07122e\") " Jan 29 11:48:12 crc kubenswrapper[4766]: I0129 11:48:12.939699 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918e093d-d6df-472e-b2cc-d1951d07122e-scripts\") pod \"918e093d-d6df-472e-b2cc-d1951d07122e\" (UID: \"918e093d-d6df-472e-b2cc-d1951d07122e\") " Jan 29 11:48:12 crc kubenswrapper[4766]: I0129 11:48:12.939776 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/918e093d-d6df-472e-b2cc-d1951d07122e-sg-core-conf-yaml\") pod \"918e093d-d6df-472e-b2cc-d1951d07122e\" (UID: \"918e093d-d6df-472e-b2cc-d1951d07122e\") " Jan 29 11:48:12 crc kubenswrapper[4766]: I0129 11:48:12.941522 4766 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918e093d-d6df-472e-b2cc-d1951d07122e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:12 crc kubenswrapper[4766]: I0129 11:48:12.941552 4766 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918e093d-d6df-472e-b2cc-d1951d07122e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:12 crc kubenswrapper[4766]: I0129 11:48:12.943935 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/918e093d-d6df-472e-b2cc-d1951d07122e-kube-api-access-vmwvf" (OuterVolumeSpecName: "kube-api-access-vmwvf") pod "918e093d-d6df-472e-b2cc-d1951d07122e" (UID: "918e093d-d6df-472e-b2cc-d1951d07122e"). InnerVolumeSpecName "kube-api-access-vmwvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:48:12 crc kubenswrapper[4766]: I0129 11:48:12.945050 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918e093d-d6df-472e-b2cc-d1951d07122e-scripts" (OuterVolumeSpecName: "scripts") pod "918e093d-d6df-472e-b2cc-d1951d07122e" (UID: "918e093d-d6df-472e-b2cc-d1951d07122e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:48:12 crc kubenswrapper[4766]: I0129 11:48:12.981521 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918e093d-d6df-472e-b2cc-d1951d07122e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "918e093d-d6df-472e-b2cc-d1951d07122e" (UID: "918e093d-d6df-472e-b2cc-d1951d07122e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.019185 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918e093d-d6df-472e-b2cc-d1951d07122e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "918e093d-d6df-472e-b2cc-d1951d07122e" (UID: "918e093d-d6df-472e-b2cc-d1951d07122e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.037763 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918e093d-d6df-472e-b2cc-d1951d07122e-config-data" (OuterVolumeSpecName: "config-data") pod "918e093d-d6df-472e-b2cc-d1951d07122e" (UID: "918e093d-d6df-472e-b2cc-d1951d07122e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.042975 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918e093d-d6df-472e-b2cc-d1951d07122e-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.043009 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmwvf\" (UniqueName: \"kubernetes.io/projected/918e093d-d6df-472e-b2cc-d1951d07122e-kube-api-access-vmwvf\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.043020 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918e093d-d6df-472e-b2cc-d1951d07122e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.043029 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918e093d-d6df-472e-b2cc-d1951d07122e-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.043038 4766 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/918e093d-d6df-472e-b2cc-d1951d07122e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.735967 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918e093d-d6df-472e-b2cc-d1951d07122e","Type":"ContainerDied","Data":"99ff2e254886aea61752a26baf3b68a04e04e4cbf18fc31022e813f749dd1523"} Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.736039 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.736039 4766 scope.go:117] "RemoveContainer" containerID="c7d063e77079dc0f9b0ecc7b6f91714548ca83ac397bb13c73de527c269488c6" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.761643 4766 scope.go:117] "RemoveContainer" containerID="28d1a0f6b685ec0c1db62c04ee015ff07f3bba4d9a5d9b4f2fa4ebded079e855" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.768657 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.785008 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.785668 4766 scope.go:117] "RemoveContainer" containerID="0bd3701b0fc04f2f6af12a529b0383e7c6ba65ac58885a960d0522e02123d03d" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.799323 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:48:13 crc kubenswrapper[4766]: E0129 11:48:13.800024 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="918e093d-d6df-472e-b2cc-d1951d07122e" containerName="sg-core" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.800048 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="918e093d-d6df-472e-b2cc-d1951d07122e" containerName="sg-core" Jan 29 11:48:13 crc kubenswrapper[4766]: E0129 11:48:13.800073 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="918e093d-d6df-472e-b2cc-d1951d07122e" containerName="ceilometer-central-agent" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.800082 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="918e093d-d6df-472e-b2cc-d1951d07122e" containerName="ceilometer-central-agent" Jan 29 11:48:13 crc kubenswrapper[4766]: E0129 11:48:13.800096 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="918e093d-d6df-472e-b2cc-d1951d07122e" containerName="ceilometer-notification-agent" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.800216 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="918e093d-d6df-472e-b2cc-d1951d07122e" containerName="ceilometer-notification-agent" Jan 29 11:48:13 crc kubenswrapper[4766]: E0129 11:48:13.800239 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="918e093d-d6df-472e-b2cc-d1951d07122e" containerName="proxy-httpd" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.800248 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="918e093d-d6df-472e-b2cc-d1951d07122e" containerName="proxy-httpd" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.800482 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="918e093d-d6df-472e-b2cc-d1951d07122e" containerName="ceilometer-notification-agent" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.800507 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="918e093d-d6df-472e-b2cc-d1951d07122e" containerName="proxy-httpd" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.800521 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="918e093d-d6df-472e-b2cc-d1951d07122e" containerName="ceilometer-central-agent" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.800533 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="918e093d-d6df-472e-b2cc-d1951d07122e" containerName="sg-core" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.804548 4766 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.806396 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.807483 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.830650 4766 scope.go:117] "RemoveContainer" containerID="491258a763f89d6a553d0ca24d5585fd15b3f994ad574051847cc3f37fe1795d" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.841302 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.861178 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eff8bd7d-54dd-4551-b4cc-ba2937c82324-config-data\") pod \"ceilometer-0\" (UID: \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\") " pod="openstack/ceilometer-0" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.861230 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff8bd7d-54dd-4551-b4cc-ba2937c82324-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\") " pod="openstack/ceilometer-0" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.861261 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6d2t\" (UniqueName: \"kubernetes.io/projected/eff8bd7d-54dd-4551-b4cc-ba2937c82324-kube-api-access-q6d2t\") pod \"ceilometer-0\" (UID: \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\") " pod="openstack/ceilometer-0" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.861336 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eff8bd7d-54dd-4551-b4cc-ba2937c82324-log-httpd\") pod \"ceilometer-0\" (UID: \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\") " pod="openstack/ceilometer-0" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.861374 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eff8bd7d-54dd-4551-b4cc-ba2937c82324-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\") " pod="openstack/ceilometer-0" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.861434 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eff8bd7d-54dd-4551-b4cc-ba2937c82324-scripts\") pod \"ceilometer-0\" (UID: \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\") " pod="openstack/ceilometer-0" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.861488 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eff8bd7d-54dd-4551-b4cc-ba2937c82324-run-httpd\") pod \"ceilometer-0\" (UID: \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\") " pod="openstack/ceilometer-0" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.963106 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/eff8bd7d-54dd-4551-b4cc-ba2937c82324-log-httpd\") pod \"ceilometer-0\" (UID: \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\") " pod="openstack/ceilometer-0" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.963180 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eff8bd7d-54dd-4551-b4cc-ba2937c82324-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\") " pod="openstack/ceilometer-0" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.963206 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eff8bd7d-54dd-4551-b4cc-ba2937c82324-scripts\") pod \"ceilometer-0\" (UID: \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\") " pod="openstack/ceilometer-0" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.963571 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eff8bd7d-54dd-4551-b4cc-ba2937c82324-run-httpd\") pod \"ceilometer-0\" (UID: \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\") " pod="openstack/ceilometer-0" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.963630 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eff8bd7d-54dd-4551-b4cc-ba2937c82324-log-httpd\") pod \"ceilometer-0\" (UID: \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\") " pod="openstack/ceilometer-0" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.963660 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eff8bd7d-54dd-4551-b4cc-ba2937c82324-run-httpd\") pod \"ceilometer-0\" (UID: \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\") " pod="openstack/ceilometer-0" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.963908 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eff8bd7d-54dd-4551-b4cc-ba2937c82324-config-data\") pod \"ceilometer-0\" (UID: \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\") " pod="openstack/ceilometer-0" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.964241 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff8bd7d-54dd-4551-b4cc-ba2937c82324-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\") " pod="openstack/ceilometer-0" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.964323 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6d2t\" (UniqueName: \"kubernetes.io/projected/eff8bd7d-54dd-4551-b4cc-ba2937c82324-kube-api-access-q6d2t\") pod \"ceilometer-0\" (UID: \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\") " pod="openstack/ceilometer-0" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.968934 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff8bd7d-54dd-4551-b4cc-ba2937c82324-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\") " pod="openstack/ceilometer-0" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.969176 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/eff8bd7d-54dd-4551-b4cc-ba2937c82324-config-data\") pod \"ceilometer-0\" (UID: \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\") " pod="openstack/ceilometer-0" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.969194 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eff8bd7d-54dd-4551-b4cc-ba2937c82324-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\") " pod="openstack/ceilometer-0" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.971262 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eff8bd7d-54dd-4551-b4cc-ba2937c82324-scripts\") pod \"ceilometer-0\" (UID: \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\") " pod="openstack/ceilometer-0" Jan 29 11:48:13 crc kubenswrapper[4766]: I0129 11:48:13.981562 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6d2t\" (UniqueName: \"kubernetes.io/projected/eff8bd7d-54dd-4551-b4cc-ba2937c82324-kube-api-access-q6d2t\") pod \"ceilometer-0\" (UID: \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\") " pod="openstack/ceilometer-0" Jan 29 11:48:14 crc kubenswrapper[4766]: I0129 11:48:14.127764 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:48:14 crc kubenswrapper[4766]: I0129 11:48:14.580160 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:48:14 crc kubenswrapper[4766]: W0129 11:48:14.587167 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeff8bd7d_54dd_4551_b4cc_ba2937c82324.slice/crio-85534ecba83134ab364bfd51817ee8e8a8da1eb9e7d096bc9c0befd134ef37f3 WatchSource:0}: Error finding container 85534ecba83134ab364bfd51817ee8e8a8da1eb9e7d096bc9c0befd134ef37f3: Status 404 returned error can't find the container with id 85534ecba83134ab364bfd51817ee8e8a8da1eb9e7d096bc9c0befd134ef37f3 Jan 29 11:48:14 crc kubenswrapper[4766]: I0129 11:48:14.747492 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eff8bd7d-54dd-4551-b4cc-ba2937c82324","Type":"ContainerStarted","Data":"85534ecba83134ab364bfd51817ee8e8a8da1eb9e7d096bc9c0befd134ef37f3"} Jan 29 11:48:15 crc kubenswrapper[4766]: I0129 11:48:15.251691 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="918e093d-d6df-472e-b2cc-d1951d07122e" path="/var/lib/kubelet/pods/918e093d-d6df-472e-b2cc-d1951d07122e/volumes" Jan 29 11:48:15 crc kubenswrapper[4766]: I0129 11:48:15.759148 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eff8bd7d-54dd-4551-b4cc-ba2937c82324","Type":"ContainerStarted","Data":"937ca40ed4fd7b41e809e3599fce3939551af21d598c3b2ecf3a003dd5bf2429"} Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.129400 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.361738 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.362078 4766 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.362134 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.363227 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d"} pod="openshift-machine-config-operator/machine-config-daemon-npgg8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.363291 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" containerID="cri-o://0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d" gracePeriod=600 Jan 29 11:48:16 crc kubenswrapper[4766]: E0129 11:48:16.487503 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.661664 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-lmqls"] Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.663338 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lmqls" Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.664937 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.665538 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.678492 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-lmqls"] Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.722903 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75249388-3798-4187-b09f-2e2bdfb0fd85-config-data\") pod \"nova-cell0-cell-mapping-lmqls\" (UID: \"75249388-3798-4187-b09f-2e2bdfb0fd85\") " pod="openstack/nova-cell0-cell-mapping-lmqls" Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.723281 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75249388-3798-4187-b09f-2e2bdfb0fd85-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-lmqls\" (UID: \"75249388-3798-4187-b09f-2e2bdfb0fd85\") " pod="openstack/nova-cell0-cell-mapping-lmqls" Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.723351 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75249388-3798-4187-b09f-2e2bdfb0fd85-scripts\") pod \"nova-cell0-cell-mapping-lmqls\" (UID: \"75249388-3798-4187-b09f-2e2bdfb0fd85\") " pod="openstack/nova-cell0-cell-mapping-lmqls" Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.723429 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvcf8\" (UniqueName: \"kubernetes.io/projected/75249388-3798-4187-b09f-2e2bdfb0fd85-kube-api-access-tvcf8\") pod \"nova-cell0-cell-mapping-lmqls\" (UID: \"75249388-3798-4187-b09f-2e2bdfb0fd85\") " pod="openstack/nova-cell0-cell-mapping-lmqls" Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.806276 4766 generic.go:334] "Generic (PLEG): container finished" podID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerID="0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d" exitCode=0 Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.806359 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" event={"ID":"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a","Type":"ContainerDied","Data":"0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d"} Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.806392 4766 scope.go:117] "RemoveContainer" containerID="bb57735502b7ce72125607b2636513bf8e24464584c5c5d20047f0fe3c421130" Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.807043 4766 scope.go:117] "RemoveContainer" containerID="0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d" Jan 29 11:48:16 crc kubenswrapper[4766]: E0129 11:48:16.807389 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.827717 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75249388-3798-4187-b09f-2e2bdfb0fd85-config-data\") pod \"nova-cell0-cell-mapping-lmqls\" (UID: \"75249388-3798-4187-b09f-2e2bdfb0fd85\") " pod="openstack/nova-cell0-cell-mapping-lmqls" Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.827769 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75249388-3798-4187-b09f-2e2bdfb0fd85-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-lmqls\" (UID: \"75249388-3798-4187-b09f-2e2bdfb0fd85\") " pod="openstack/nova-cell0-cell-mapping-lmqls" Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.827825 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75249388-3798-4187-b09f-2e2bdfb0fd85-scripts\") pod \"nova-cell0-cell-mapping-lmqls\" (UID: \"75249388-3798-4187-b09f-2e2bdfb0fd85\") " pod="openstack/nova-cell0-cell-mapping-lmqls" Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.827873 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvcf8\" (UniqueName: \"kubernetes.io/projected/75249388-3798-4187-b09f-2e2bdfb0fd85-kube-api-access-tvcf8\") pod \"nova-cell0-cell-mapping-lmqls\" (UID: \"75249388-3798-4187-b09f-2e2bdfb0fd85\") " pod="openstack/nova-cell0-cell-mapping-lmqls" Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.840028 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75249388-3798-4187-b09f-2e2bdfb0fd85-config-data\") pod \"nova-cell0-cell-mapping-lmqls\" (UID: \"75249388-3798-4187-b09f-2e2bdfb0fd85\") " pod="openstack/nova-cell0-cell-mapping-lmqls" Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.841464 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eff8bd7d-54dd-4551-b4cc-ba2937c82324","Type":"ContainerStarted","Data":"840b33106c05a1bc82e600553cfa4ddad2df8e43dac113d70a41ad0582941a12"} Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.842366 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75249388-3798-4187-b09f-2e2bdfb0fd85-scripts\") pod \"nova-cell0-cell-mapping-lmqls\" (UID: \"75249388-3798-4187-b09f-2e2bdfb0fd85\") " pod="openstack/nova-cell0-cell-mapping-lmqls" Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.848100 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75249388-3798-4187-b09f-2e2bdfb0fd85-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-lmqls\" (UID: \"75249388-3798-4187-b09f-2e2bdfb0fd85\") " pod="openstack/nova-cell0-cell-mapping-lmqls" Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.872147 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvcf8\" (UniqueName: \"kubernetes.io/projected/75249388-3798-4187-b09f-2e2bdfb0fd85-kube-api-access-tvcf8\") pod \"nova-cell0-cell-mapping-lmqls\" (UID: \"75249388-3798-4187-b09f-2e2bdfb0fd85\") " pod="openstack/nova-cell0-cell-mapping-lmqls" Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 
11:48:16.899511 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.901740 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.917005 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.946746 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85l94\" (UniqueName: \"kubernetes.io/projected/aa981e86-43ce-42d8-973a-abc1097dd8c9-kube-api-access-85l94\") pod \"nova-api-0\" (UID: \"aa981e86-43ce-42d8-973a-abc1097dd8c9\") " pod="openstack/nova-api-0" Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.946820 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa981e86-43ce-42d8-973a-abc1097dd8c9-config-data\") pod \"nova-api-0\" (UID: \"aa981e86-43ce-42d8-973a-abc1097dd8c9\") " pod="openstack/nova-api-0" Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.946888 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa981e86-43ce-42d8-973a-abc1097dd8c9-logs\") pod \"nova-api-0\" (UID: \"aa981e86-43ce-42d8-973a-abc1097dd8c9\") " pod="openstack/nova-api-0" Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.946921 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa981e86-43ce-42d8-973a-abc1097dd8c9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aa981e86-43ce-42d8-973a-abc1097dd8c9\") " pod="openstack/nova-api-0" Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.961390 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.991479 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.993046 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:16 crc kubenswrapper[4766]: I0129 11:48:16.994443 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lmqls" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.000306 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.040785 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.052965 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/249e533d-989c-4441-875f-23c15d261e83-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"249e533d-989c-4441-875f-23c15d261e83\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.053160 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5v8b\" (UniqueName: \"kubernetes.io/projected/249e533d-989c-4441-875f-23c15d261e83-kube-api-access-m5v8b\") pod \"nova-cell1-novncproxy-0\" (UID: \"249e533d-989c-4441-875f-23c15d261e83\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.053545 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85l94\" (UniqueName: \"kubernetes.io/projected/aa981e86-43ce-42d8-973a-abc1097dd8c9-kube-api-access-85l94\") pod \"nova-api-0\" (UID: \"aa981e86-43ce-42d8-973a-abc1097dd8c9\") " pod="openstack/nova-api-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.053612 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa981e86-43ce-42d8-973a-abc1097dd8c9-config-data\") pod \"nova-api-0\" (UID: \"aa981e86-43ce-42d8-973a-abc1097dd8c9\") " pod="openstack/nova-api-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.053683 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/249e533d-989c-4441-875f-23c15d261e83-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"249e533d-989c-4441-875f-23c15d261e83\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.053747 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa981e86-43ce-42d8-973a-abc1097dd8c9-logs\") pod \"nova-api-0\" (UID: \"aa981e86-43ce-42d8-973a-abc1097dd8c9\") " pod="openstack/nova-api-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.053766 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa981e86-43ce-42d8-973a-abc1097dd8c9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aa981e86-43ce-42d8-973a-abc1097dd8c9\") " pod="openstack/nova-api-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.056069 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa981e86-43ce-42d8-973a-abc1097dd8c9-logs\") pod \"nova-api-0\" (UID: \"aa981e86-43ce-42d8-973a-abc1097dd8c9\") " pod="openstack/nova-api-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.077464 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/aa981e86-43ce-42d8-973a-abc1097dd8c9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aa981e86-43ce-42d8-973a-abc1097dd8c9\") " pod="openstack/nova-api-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.078236 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa981e86-43ce-42d8-973a-abc1097dd8c9-config-data\") pod \"nova-api-0\" (UID: \"aa981e86-43ce-42d8-973a-abc1097dd8c9\") " pod="openstack/nova-api-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.117301 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85l94\" (UniqueName: \"kubernetes.io/projected/aa981e86-43ce-42d8-973a-abc1097dd8c9-kube-api-access-85l94\") pod \"nova-api-0\" (UID: \"aa981e86-43ce-42d8-973a-abc1097dd8c9\") " pod="openstack/nova-api-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.122695 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.132738 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.137181 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.160696 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/249e533d-989c-4441-875f-23c15d261e83-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"249e533d-989c-4441-875f-23c15d261e83\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.160760 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5v8b\" (UniqueName: \"kubernetes.io/projected/249e533d-989c-4441-875f-23c15d261e83-kube-api-access-m5v8b\") pod \"nova-cell1-novncproxy-0\" (UID: \"249e533d-989c-4441-875f-23c15d261e83\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.160883 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/249e533d-989c-4441-875f-23c15d261e83-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"249e533d-989c-4441-875f-23c15d261e83\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.181656 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/249e533d-989c-4441-875f-23c15d261e83-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"249e533d-989c-4441-875f-23c15d261e83\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.187208 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/249e533d-989c-4441-875f-23c15d261e83-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"249e533d-989c-4441-875f-23c15d261e83\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.193907 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.212523 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-m5v8b\" (UniqueName: \"kubernetes.io/projected/249e533d-989c-4441-875f-23c15d261e83-kube-api-access-m5v8b\") pod \"nova-cell1-novncproxy-0\" (UID: \"249e533d-989c-4441-875f-23c15d261e83\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.261959 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f103e36-e206-4730-b776-9d9f0bc9b264-config-data\") pod \"nova-scheduler-0\" (UID: \"2f103e36-e206-4730-b776-9d9f0bc9b264\") " pod="openstack/nova-scheduler-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.262013 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f103e36-e206-4730-b776-9d9f0bc9b264-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2f103e36-e206-4730-b776-9d9f0bc9b264\") " pod="openstack/nova-scheduler-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.262065 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5vs8\" (UniqueName: \"kubernetes.io/projected/2f103e36-e206-4730-b776-9d9f0bc9b264-kube-api-access-r5vs8\") pod \"nova-scheduler-0\" (UID: \"2f103e36-e206-4730-b776-9d9f0bc9b264\") " pod="openstack/nova-scheduler-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.276197 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.284758 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.288038 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.288155 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.293693 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.320234 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.366058 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-7t62v"] Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.370996 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-7t62v" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.374621 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f103e36-e206-4730-b776-9d9f0bc9b264-config-data\") pod \"nova-scheduler-0\" (UID: \"2f103e36-e206-4730-b776-9d9f0bc9b264\") " pod="openstack/nova-scheduler-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.374688 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6bb8ae-937c-43d9-a1ce-9db3a125ac74-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4f6bb8ae-937c-43d9-a1ce-9db3a125ac74\") " pod="openstack/nova-metadata-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.374753 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xlz7\" (UniqueName: \"kubernetes.io/projected/4f6bb8ae-937c-43d9-a1ce-9db3a125ac74-kube-api-access-9xlz7\") pod \"nova-metadata-0\" (UID: \"4f6bb8ae-937c-43d9-a1ce-9db3a125ac74\") " pod="openstack/nova-metadata-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.374784 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f103e36-e206-4730-b776-9d9f0bc9b264-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2f103e36-e206-4730-b776-9d9f0bc9b264\") " pod="openstack/nova-scheduler-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.374880 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f6bb8ae-937c-43d9-a1ce-9db3a125ac74-config-data\") pod \"nova-metadata-0\" (UID: \"4f6bb8ae-937c-43d9-a1ce-9db3a125ac74\") " pod="openstack/nova-metadata-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.374957 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5vs8\" (UniqueName: \"kubernetes.io/projected/2f103e36-e206-4730-b776-9d9f0bc9b264-kube-api-access-r5vs8\") pod \"nova-scheduler-0\" (UID: \"2f103e36-e206-4730-b776-9d9f0bc9b264\") " pod="openstack/nova-scheduler-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.374996 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f6bb8ae-937c-43d9-a1ce-9db3a125ac74-logs\") pod \"nova-metadata-0\" (UID: \"4f6bb8ae-937c-43d9-a1ce-9db3a125ac74\") " pod="openstack/nova-metadata-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.383283 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f103e36-e206-4730-b776-9d9f0bc9b264-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2f103e36-e206-4730-b776-9d9f0bc9b264\") " pod="openstack/nova-scheduler-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.411448 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5vs8\" (UniqueName: \"kubernetes.io/projected/2f103e36-e206-4730-b776-9d9f0bc9b264-kube-api-access-r5vs8\") pod \"nova-scheduler-0\" (UID: \"2f103e36-e206-4730-b776-9d9f0bc9b264\") " pod="openstack/nova-scheduler-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.417131 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f103e36-e206-4730-b776-9d9f0bc9b264-config-data\") pod \"nova-scheduler-0\" (UID: \"2f103e36-e206-4730-b776-9d9f0bc9b264\") " pod="openstack/nova-scheduler-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.480040 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-config\") pod \"dnsmasq-dns-bccf8f775-7t62v\" (UID: \"1f8d0686-f650-46fa-a71e-035021e90814\") " pod="openstack/dnsmasq-dns-bccf8f775-7t62v" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.480501 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f6bb8ae-937c-43d9-a1ce-9db3a125ac74-config-data\") pod \"nova-metadata-0\" (UID: \"4f6bb8ae-937c-43d9-a1ce-9db3a125ac74\") " pod="openstack/nova-metadata-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.480562 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f6bb8ae-937c-43d9-a1ce-9db3a125ac74-logs\") pod \"nova-metadata-0\" (UID: \"4f6bb8ae-937c-43d9-a1ce-9db3a125ac74\") " pod="openstack/nova-metadata-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.480583 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x82c\" (UniqueName: \"kubernetes.io/projected/1f8d0686-f650-46fa-a71e-035021e90814-kube-api-access-6x82c\") pod \"dnsmasq-dns-bccf8f775-7t62v\" (UID: \"1f8d0686-f650-46fa-a71e-035021e90814\") " pod="openstack/dnsmasq-dns-bccf8f775-7t62v" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.480648 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-7t62v\" (UID: \"1f8d0686-f650-46fa-a71e-035021e90814\") " pod="openstack/dnsmasq-dns-bccf8f775-7t62v" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.480674 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-7t62v\" (UID: \"1f8d0686-f650-46fa-a71e-035021e90814\") " pod="openstack/dnsmasq-dns-bccf8f775-7t62v" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.480715 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-dns-svc\") pod \"dnsmasq-dns-bccf8f775-7t62v\" (UID: \"1f8d0686-f650-46fa-a71e-035021e90814\") " pod="openstack/dnsmasq-dns-bccf8f775-7t62v" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.480760 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6bb8ae-937c-43d9-a1ce-9db3a125ac74-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4f6bb8ae-937c-43d9-a1ce-9db3a125ac74\") " pod="openstack/nova-metadata-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.480776 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-7t62v\" (UID: \"1f8d0686-f650-46fa-a71e-035021e90814\") " pod="openstack/dnsmasq-dns-bccf8f775-7t62v" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.480806 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xlz7\" (UniqueName: \"kubernetes.io/projected/4f6bb8ae-937c-43d9-a1ce-9db3a125ac74-kube-api-access-9xlz7\") pod \"nova-metadata-0\" (UID: \"4f6bb8ae-937c-43d9-a1ce-9db3a125ac74\") " pod="openstack/nova-metadata-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.481184 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f6bb8ae-937c-43d9-a1ce-9db3a125ac74-logs\") pod \"nova-metadata-0\" (UID: \"4f6bb8ae-937c-43d9-a1ce-9db3a125ac74\") " pod="openstack/nova-metadata-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.483352 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-7t62v"] Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.492631 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6bb8ae-937c-43d9-a1ce-9db3a125ac74-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4f6bb8ae-937c-43d9-a1ce-9db3a125ac74\") " pod="openstack/nova-metadata-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.501188 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f6bb8ae-937c-43d9-a1ce-9db3a125ac74-config-data\") pod \"nova-metadata-0\" (UID: \"4f6bb8ae-937c-43d9-a1ce-9db3a125ac74\") " pod="openstack/nova-metadata-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.505961 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xlz7\" (UniqueName: \"kubernetes.io/projected/4f6bb8ae-937c-43d9-a1ce-9db3a125ac74-kube-api-access-9xlz7\") pod \"nova-metadata-0\" (UID: \"4f6bb8ae-937c-43d9-a1ce-9db3a125ac74\") " pod="openstack/nova-metadata-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.562167 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.583611 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-7t62v\" (UID: \"1f8d0686-f650-46fa-a71e-035021e90814\") " pod="openstack/dnsmasq-dns-bccf8f775-7t62v" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.583680 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-7t62v\" (UID: \"1f8d0686-f650-46fa-a71e-035021e90814\") " pod="openstack/dnsmasq-dns-bccf8f775-7t62v" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.583718 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-dns-svc\") pod \"dnsmasq-dns-bccf8f775-7t62v\" (UID: \"1f8d0686-f650-46fa-a71e-035021e90814\") " pod="openstack/dnsmasq-dns-bccf8f775-7t62v" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.583762 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-7t62v\" (UID: \"1f8d0686-f650-46fa-a71e-035021e90814\") " pod="openstack/dnsmasq-dns-bccf8f775-7t62v" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.583808 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-config\") pod \"dnsmasq-dns-bccf8f775-7t62v\" (UID: \"1f8d0686-f650-46fa-a71e-035021e90814\") " pod="openstack/dnsmasq-dns-bccf8f775-7t62v" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.583854 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x82c\" (UniqueName: \"kubernetes.io/projected/1f8d0686-f650-46fa-a71e-035021e90814-kube-api-access-6x82c\") pod \"dnsmasq-dns-bccf8f775-7t62v\" (UID: \"1f8d0686-f650-46fa-a71e-035021e90814\") " pod="openstack/dnsmasq-dns-bccf8f775-7t62v" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.585455 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-dns-svc\") pod \"dnsmasq-dns-bccf8f775-7t62v\" (UID: \"1f8d0686-f650-46fa-a71e-035021e90814\") " pod="openstack/dnsmasq-dns-bccf8f775-7t62v" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.585473 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-7t62v\" (UID: \"1f8d0686-f650-46fa-a71e-035021e90814\") " pod="openstack/dnsmasq-dns-bccf8f775-7t62v" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.585540 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-7t62v\" (UID: \"1f8d0686-f650-46fa-a71e-035021e90814\") " pod="openstack/dnsmasq-dns-bccf8f775-7t62v" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 
11:48:17.586357 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-config\") pod \"dnsmasq-dns-bccf8f775-7t62v\" (UID: \"1f8d0686-f650-46fa-a71e-035021e90814\") " pod="openstack/dnsmasq-dns-bccf8f775-7t62v" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.587740 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-7t62v\" (UID: \"1f8d0686-f650-46fa-a71e-035021e90814\") " pod="openstack/dnsmasq-dns-bccf8f775-7t62v" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.604135 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x82c\" (UniqueName: \"kubernetes.io/projected/1f8d0686-f650-46fa-a71e-035021e90814-kube-api-access-6x82c\") pod \"dnsmasq-dns-bccf8f775-7t62v\" (UID: \"1f8d0686-f650-46fa-a71e-035021e90814\") " pod="openstack/dnsmasq-dns-bccf8f775-7t62v" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.611502 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.714757 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-7t62v" Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.859142 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-lmqls"] Jan 29 11:48:17 crc kubenswrapper[4766]: I0129 11:48:17.889046 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eff8bd7d-54dd-4551-b4cc-ba2937c82324","Type":"ContainerStarted","Data":"b461980b50a29cf644096082e55a47d40d39bf90cfb61e2ea7d9c02a50a84352"} Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.003086 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.110149 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:48:18 crc kubenswrapper[4766]: W0129 11:48:18.128054 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa981e86_43ce_42d8_973a_abc1097dd8c9.slice/crio-684e1c44c276befaf71e1913f291874864293d79d54f0d8a9171bef04d144da9 WatchSource:0}: Error finding container 684e1c44c276befaf71e1913f291874864293d79d54f0d8a9171bef04d144da9: Status 404 returned error can't find the container with id 684e1c44c276befaf71e1913f291874864293d79d54f0d8a9171bef04d144da9 Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.220608 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5zsbb"] Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.221767 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5zsbb" Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.225270 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.225440 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.233394 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5zsbb"] Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.269368 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:48:18 crc kubenswrapper[4766]: W0129 11:48:18.279912 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f6bb8ae_937c_43d9_a1ce_9db3a125ac74.slice/crio-2e25ee0cabe9c4d3f0ab19c111587e0f70c8aea9f5e215580ba2ec3a186cd33b WatchSource:0}: Error finding container 2e25ee0cabe9c4d3f0ab19c111587e0f70c8aea9f5e215580ba2ec3a186cd33b: Status 404 returned error can't find the container with id 2e25ee0cabe9c4d3f0ab19c111587e0f70c8aea9f5e215580ba2ec3a186cd33b Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.293042 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.298149 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh8bg\" (UniqueName: \"kubernetes.io/projected/2f68f3b0-f008-4a27-a250-9efe5bdf5fa0-kube-api-access-qh8bg\") pod \"nova-cell1-conductor-db-sync-5zsbb\" (UID: \"2f68f3b0-f008-4a27-a250-9efe5bdf5fa0\") " pod="openstack/nova-cell1-conductor-db-sync-5zsbb" Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.298229 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f68f3b0-f008-4a27-a250-9efe5bdf5fa0-config-data\") pod \"nova-cell1-conductor-db-sync-5zsbb\" (UID: \"2f68f3b0-f008-4a27-a250-9efe5bdf5fa0\") " pod="openstack/nova-cell1-conductor-db-sync-5zsbb" Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.298288 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f68f3b0-f008-4a27-a250-9efe5bdf5fa0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5zsbb\" (UID: \"2f68f3b0-f008-4a27-a250-9efe5bdf5fa0\") " pod="openstack/nova-cell1-conductor-db-sync-5zsbb" Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.298487 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f68f3b0-f008-4a27-a250-9efe5bdf5fa0-scripts\") pod \"nova-cell1-conductor-db-sync-5zsbb\" (UID: \"2f68f3b0-f008-4a27-a250-9efe5bdf5fa0\") " pod="openstack/nova-cell1-conductor-db-sync-5zsbb" Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.400089 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh8bg\" (UniqueName: \"kubernetes.io/projected/2f68f3b0-f008-4a27-a250-9efe5bdf5fa0-kube-api-access-qh8bg\") pod \"nova-cell1-conductor-db-sync-5zsbb\" (UID: \"2f68f3b0-f008-4a27-a250-9efe5bdf5fa0\") " pod="openstack/nova-cell1-conductor-db-sync-5zsbb" Jan 29 11:48:18 crc 
kubenswrapper[4766]: I0129 11:48:18.400155 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f68f3b0-f008-4a27-a250-9efe5bdf5fa0-config-data\") pod \"nova-cell1-conductor-db-sync-5zsbb\" (UID: \"2f68f3b0-f008-4a27-a250-9efe5bdf5fa0\") " pod="openstack/nova-cell1-conductor-db-sync-5zsbb" Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.400208 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f68f3b0-f008-4a27-a250-9efe5bdf5fa0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5zsbb\" (UID: \"2f68f3b0-f008-4a27-a250-9efe5bdf5fa0\") " pod="openstack/nova-cell1-conductor-db-sync-5zsbb" Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.400307 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f68f3b0-f008-4a27-a250-9efe5bdf5fa0-scripts\") pod \"nova-cell1-conductor-db-sync-5zsbb\" (UID: \"2f68f3b0-f008-4a27-a250-9efe5bdf5fa0\") " pod="openstack/nova-cell1-conductor-db-sync-5zsbb" Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.405135 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f68f3b0-f008-4a27-a250-9efe5bdf5fa0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5zsbb\" (UID: \"2f68f3b0-f008-4a27-a250-9efe5bdf5fa0\") " pod="openstack/nova-cell1-conductor-db-sync-5zsbb" Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.405697 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f68f3b0-f008-4a27-a250-9efe5bdf5fa0-config-data\") pod \"nova-cell1-conductor-db-sync-5zsbb\" (UID: \"2f68f3b0-f008-4a27-a250-9efe5bdf5fa0\") " pod="openstack/nova-cell1-conductor-db-sync-5zsbb" Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.408933 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f68f3b0-f008-4a27-a250-9efe5bdf5fa0-scripts\") pod \"nova-cell1-conductor-db-sync-5zsbb\" (UID: \"2f68f3b0-f008-4a27-a250-9efe5bdf5fa0\") " pod="openstack/nova-cell1-conductor-db-sync-5zsbb" Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.422173 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh8bg\" (UniqueName: \"kubernetes.io/projected/2f68f3b0-f008-4a27-a250-9efe5bdf5fa0-kube-api-access-qh8bg\") pod \"nova-cell1-conductor-db-sync-5zsbb\" (UID: \"2f68f3b0-f008-4a27-a250-9efe5bdf5fa0\") " pod="openstack/nova-cell1-conductor-db-sync-5zsbb" Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.449012 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-7t62v"] Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.543108 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5zsbb" Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.919775 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2f103e36-e206-4730-b776-9d9f0bc9b264","Type":"ContainerStarted","Data":"bbcfda01f63a3d3640585234256fa765baf7df10d27fa5d01818d6b79eda4554"} Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.923690 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lmqls" event={"ID":"75249388-3798-4187-b09f-2e2bdfb0fd85","Type":"ContainerStarted","Data":"a705a413907ef327b1f4edf40f66d732da2bb972954ab9c5d54ac904827256ea"} Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.923715 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lmqls" event={"ID":"75249388-3798-4187-b09f-2e2bdfb0fd85","Type":"ContainerStarted","Data":"b3ffeed11b51e4afd51f385fc4108e6b8d7119be2bee80973941a12dd7f3d28e"} Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.938984 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-lmqls" podStartSLOduration=2.938968858 podStartE2EDuration="2.938968858s" podCreationTimestamp="2026-01-29 11:48:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:48:18.938790084 +0000 UTC m=+1636.051183095" watchObservedRunningTime="2026-01-29 11:48:18.938968858 +0000 UTC m=+1636.051361869" Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.943080 4766 generic.go:334] "Generic (PLEG): container finished" podID="1f8d0686-f650-46fa-a71e-035021e90814" containerID="6f7ca2a6d21d626cc1b52a2b3fd238045ab39b3e93dfc5fe14f09b4743f77fdc" exitCode=0 Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.943125 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-7t62v" event={"ID":"1f8d0686-f650-46fa-a71e-035021e90814","Type":"ContainerDied","Data":"6f7ca2a6d21d626cc1b52a2b3fd238045ab39b3e93dfc5fe14f09b4743f77fdc"} Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.943160 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-7t62v" event={"ID":"1f8d0686-f650-46fa-a71e-035021e90814","Type":"ContainerStarted","Data":"f1bda9da20f1d31f642bb02d01e92ba818d809ed6daddcd0802e220b5291fee5"} Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.944659 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"249e533d-989c-4441-875f-23c15d261e83","Type":"ContainerStarted","Data":"5b9359b54269a77ffcd341acf6e92c1bf351341b44a1df75f9cf64fe5829e941"} Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.951170 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f6bb8ae-937c-43d9-a1ce-9db3a125ac74","Type":"ContainerStarted","Data":"2e25ee0cabe9c4d3f0ab19c111587e0f70c8aea9f5e215580ba2ec3a186cd33b"} Jan 29 11:48:18 crc kubenswrapper[4766]: I0129 11:48:18.956156 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aa981e86-43ce-42d8-973a-abc1097dd8c9","Type":"ContainerStarted","Data":"684e1c44c276befaf71e1913f291874864293d79d54f0d8a9171bef04d144da9"} Jan 29 11:48:19 crc kubenswrapper[4766]: I0129 11:48:19.155815 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5zsbb"] Jan 29 
11:48:19 crc kubenswrapper[4766]: W0129 11:48:19.174017 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f68f3b0_f008_4a27_a250_9efe5bdf5fa0.slice/crio-683aa9116e2e5bb6259c544f9465613c432fa21251db835d6f546d33303bb33c WatchSource:0}: Error finding container 683aa9116e2e5bb6259c544f9465613c432fa21251db835d6f546d33303bb33c: Status 404 returned error can't find the container with id 683aa9116e2e5bb6259c544f9465613c432fa21251db835d6f546d33303bb33c Jan 29 11:48:19 crc kubenswrapper[4766]: I0129 11:48:19.970084 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-7t62v" event={"ID":"1f8d0686-f650-46fa-a71e-035021e90814","Type":"ContainerStarted","Data":"34c1031d9864c76d7e48c80936da3c90b418e4c5bde5657d1d22a877c2f13a8c"} Jan 29 11:48:19 crc kubenswrapper[4766]: I0129 11:48:19.970657 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-7t62v" Jan 29 11:48:19 crc kubenswrapper[4766]: I0129 11:48:19.974699 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5zsbb" event={"ID":"2f68f3b0-f008-4a27-a250-9efe5bdf5fa0","Type":"ContainerStarted","Data":"bb85acb54b614f03184470afb4304b466f98b4b04565bd127f8ffdb642c19047"} Jan 29 11:48:19 crc kubenswrapper[4766]: I0129 11:48:19.974740 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5zsbb" event={"ID":"2f68f3b0-f008-4a27-a250-9efe5bdf5fa0","Type":"ContainerStarted","Data":"683aa9116e2e5bb6259c544f9465613c432fa21251db835d6f546d33303bb33c"} Jan 29 11:48:19 crc kubenswrapper[4766]: I0129 11:48:19.995716 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-7t62v" podStartSLOduration=2.9956998070000003 podStartE2EDuration="2.995699807s" podCreationTimestamp="2026-01-29 11:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:48:19.992064331 +0000 UTC m=+1637.104457352" watchObservedRunningTime="2026-01-29 11:48:19.995699807 +0000 UTC m=+1637.108092818" Jan 29 11:48:20 crc kubenswrapper[4766]: I0129 11:48:20.016197 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-5zsbb" podStartSLOduration=2.016178756 podStartE2EDuration="2.016178756s" podCreationTimestamp="2026-01-29 11:48:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:48:20.008799832 +0000 UTC m=+1637.121192843" watchObservedRunningTime="2026-01-29 11:48:20.016178756 +0000 UTC m=+1637.128571757" Jan 29 11:48:20 crc kubenswrapper[4766]: I0129 11:48:20.629129 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 11:48:20 crc kubenswrapper[4766]: I0129 11:48:20.651654 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:48:23 crc kubenswrapper[4766]: I0129 11:48:23.005071 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"249e533d-989c-4441-875f-23c15d261e83","Type":"ContainerStarted","Data":"4f812b803666bccb65380df3cfcb793f9323a751fe8d6d20a9c1ffc6bbb1d49d"} Jan 29 11:48:23 crc kubenswrapper[4766]: I0129 11:48:23.005135 4766 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="249e533d-989c-4441-875f-23c15d261e83" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://4f812b803666bccb65380df3cfcb793f9323a751fe8d6d20a9c1ffc6bbb1d49d" gracePeriod=30 Jan 29 11:48:23 crc kubenswrapper[4766]: I0129 11:48:23.009203 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f6bb8ae-937c-43d9-a1ce-9db3a125ac74","Type":"ContainerStarted","Data":"d0693ed89baf05d69635bce26e4c5eb207eddd3f5dea55f49a459244b40305df"} Jan 29 11:48:23 crc kubenswrapper[4766]: I0129 11:48:23.009432 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f6bb8ae-937c-43d9-a1ce-9db3a125ac74","Type":"ContainerStarted","Data":"f5ce43bd560492199b757154116f634b603701a7806cb85843c5f7450af0c1be"} Jan 29 11:48:23 crc kubenswrapper[4766]: I0129 11:48:23.009453 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4f6bb8ae-937c-43d9-a1ce-9db3a125ac74" containerName="nova-metadata-metadata" containerID="cri-o://d0693ed89baf05d69635bce26e4c5eb207eddd3f5dea55f49a459244b40305df" gracePeriod=30 Jan 29 11:48:23 crc kubenswrapper[4766]: I0129 11:48:23.009348 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4f6bb8ae-937c-43d9-a1ce-9db3a125ac74" containerName="nova-metadata-log" containerID="cri-o://f5ce43bd560492199b757154116f634b603701a7806cb85843c5f7450af0c1be" gracePeriod=30 Jan 29 11:48:23 crc kubenswrapper[4766]: I0129 11:48:23.019856 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aa981e86-43ce-42d8-973a-abc1097dd8c9","Type":"ContainerStarted","Data":"1230dec16eaaebb2152c1349b3b659606b2320b714954b3c77bb1dbbcb7f86ba"} Jan 29 11:48:23 crc kubenswrapper[4766]: I0129 11:48:23.019918 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aa981e86-43ce-42d8-973a-abc1097dd8c9","Type":"ContainerStarted","Data":"ad72971f41a6734e63b05e49cb6125884c04769a553e4cefbff9125c8b73c26d"} Jan 29 11:48:23 crc kubenswrapper[4766]: I0129 11:48:23.024727 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2f103e36-e206-4730-b776-9d9f0bc9b264","Type":"ContainerStarted","Data":"2aa8e942a3e1dfe2d25f4e43f92cb6937d91c09b030cf4be087491b37197791c"} Jan 29 11:48:23 crc kubenswrapper[4766]: I0129 11:48:23.046968 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eff8bd7d-54dd-4551-b4cc-ba2937c82324","Type":"ContainerStarted","Data":"298b49517787c33b63dd76f5ae50caf467e5e60fa5ea6499c26fdfdf7c066f5f"} Jan 29 11:48:23 crc kubenswrapper[4766]: I0129 11:48:23.050235 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 11:48:23 crc kubenswrapper[4766]: I0129 11:48:23.053609 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.424010072 podStartE2EDuration="7.053587314s" podCreationTimestamp="2026-01-29 11:48:16 +0000 UTC" firstStartedPulling="2026-01-29 11:48:18.011233758 +0000 UTC m=+1635.123626769" lastFinishedPulling="2026-01-29 11:48:21.640811 +0000 UTC m=+1638.753204011" observedRunningTime="2026-01-29 11:48:23.023091171 +0000 UTC m=+1640.135484182" watchObservedRunningTime="2026-01-29 11:48:23.053587314 +0000 UTC m=+1640.165980325" Jan 29 
11:48:23 crc kubenswrapper[4766]: I0129 11:48:23.082480 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.6982582429999997 podStartE2EDuration="6.082458325s" podCreationTimestamp="2026-01-29 11:48:17 +0000 UTC" firstStartedPulling="2026-01-29 11:48:18.286995019 +0000 UTC m=+1635.399388030" lastFinishedPulling="2026-01-29 11:48:21.671195101 +0000 UTC m=+1638.783588112" observedRunningTime="2026-01-29 11:48:23.050985166 +0000 UTC m=+1640.163378197" watchObservedRunningTime="2026-01-29 11:48:23.082458325 +0000 UTC m=+1640.194851336" Jan 29 11:48:23 crc kubenswrapper[4766]: I0129 11:48:23.086674 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.560237199 podStartE2EDuration="7.086657165s" podCreationTimestamp="2026-01-29 11:48:16 +0000 UTC" firstStartedPulling="2026-01-29 11:48:18.130486808 +0000 UTC m=+1635.242879819" lastFinishedPulling="2026-01-29 11:48:21.656906774 +0000 UTC m=+1638.769299785" observedRunningTime="2026-01-29 11:48:23.074999968 +0000 UTC m=+1640.187392979" watchObservedRunningTime="2026-01-29 11:48:23.086657165 +0000 UTC m=+1640.199050176" Jan 29 11:48:23 crc kubenswrapper[4766]: I0129 11:48:23.117186 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.057201309 podStartE2EDuration="10.117162789s" podCreationTimestamp="2026-01-29 11:48:13 +0000 UTC" firstStartedPulling="2026-01-29 11:48:14.591538202 +0000 UTC m=+1631.703931213" lastFinishedPulling="2026-01-29 11:48:21.651499682 +0000 UTC m=+1638.763892693" observedRunningTime="2026-01-29 11:48:23.099213786 +0000 UTC m=+1640.211606817" watchObservedRunningTime="2026-01-29 11:48:23.117162789 +0000 UTC m=+1640.229555800" Jan 29 11:48:23 crc kubenswrapper[4766]: I0129 11:48:23.131300 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.790012089 podStartE2EDuration="6.13128151s" podCreationTimestamp="2026-01-29 11:48:17 +0000 UTC" firstStartedPulling="2026-01-29 11:48:18.301551552 +0000 UTC m=+1635.413944563" lastFinishedPulling="2026-01-29 11:48:21.642820973 +0000 UTC m=+1638.755213984" observedRunningTime="2026-01-29 11:48:23.123836624 +0000 UTC m=+1640.236229635" watchObservedRunningTime="2026-01-29 11:48:23.13128151 +0000 UTC m=+1640.243674521" Jan 29 11:48:23 crc kubenswrapper[4766]: I0129 11:48:23.618078 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:48:23 crc kubenswrapper[4766]: I0129 11:48:23.815654 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xlz7\" (UniqueName: \"kubernetes.io/projected/4f6bb8ae-937c-43d9-a1ce-9db3a125ac74-kube-api-access-9xlz7\") pod \"4f6bb8ae-937c-43d9-a1ce-9db3a125ac74\" (UID: \"4f6bb8ae-937c-43d9-a1ce-9db3a125ac74\") " Jan 29 11:48:23 crc kubenswrapper[4766]: I0129 11:48:23.815983 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6bb8ae-937c-43d9-a1ce-9db3a125ac74-combined-ca-bundle\") pod \"4f6bb8ae-937c-43d9-a1ce-9db3a125ac74\" (UID: \"4f6bb8ae-937c-43d9-a1ce-9db3a125ac74\") " Jan 29 11:48:23 crc kubenswrapper[4766]: I0129 11:48:23.816044 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f6bb8ae-937c-43d9-a1ce-9db3a125ac74-config-data\") pod \"4f6bb8ae-937c-43d9-a1ce-9db3a125ac74\" (UID: \"4f6bb8ae-937c-43d9-a1ce-9db3a125ac74\") " Jan 29 11:48:23 crc kubenswrapper[4766]: I0129 11:48:23.816069 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f6bb8ae-937c-43d9-a1ce-9db3a125ac74-logs\") pod \"4f6bb8ae-937c-43d9-a1ce-9db3a125ac74\" (UID: \"4f6bb8ae-937c-43d9-a1ce-9db3a125ac74\") " Jan 29 11:48:23 crc kubenswrapper[4766]: I0129 11:48:23.817926 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f6bb8ae-937c-43d9-a1ce-9db3a125ac74-logs" (OuterVolumeSpecName: "logs") pod "4f6bb8ae-937c-43d9-a1ce-9db3a125ac74" (UID: "4f6bb8ae-937c-43d9-a1ce-9db3a125ac74"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:48:23 crc kubenswrapper[4766]: I0129 11:48:23.824570 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f6bb8ae-937c-43d9-a1ce-9db3a125ac74-kube-api-access-9xlz7" (OuterVolumeSpecName: "kube-api-access-9xlz7") pod "4f6bb8ae-937c-43d9-a1ce-9db3a125ac74" (UID: "4f6bb8ae-937c-43d9-a1ce-9db3a125ac74"). InnerVolumeSpecName "kube-api-access-9xlz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:48:23 crc kubenswrapper[4766]: I0129 11:48:23.845294 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f6bb8ae-937c-43d9-a1ce-9db3a125ac74-config-data" (OuterVolumeSpecName: "config-data") pod "4f6bb8ae-937c-43d9-a1ce-9db3a125ac74" (UID: "4f6bb8ae-937c-43d9-a1ce-9db3a125ac74"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:48:23 crc kubenswrapper[4766]: I0129 11:48:23.858176 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f6bb8ae-937c-43d9-a1ce-9db3a125ac74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f6bb8ae-937c-43d9-a1ce-9db3a125ac74" (UID: "4f6bb8ae-937c-43d9-a1ce-9db3a125ac74"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:48:23 crc kubenswrapper[4766]: I0129 11:48:23.918397 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f6bb8ae-937c-43d9-a1ce-9db3a125ac74-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:23 crc kubenswrapper[4766]: I0129 11:48:23.918724 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f6bb8ae-937c-43d9-a1ce-9db3a125ac74-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:23 crc kubenswrapper[4766]: I0129 11:48:23.918808 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xlz7\" (UniqueName: \"kubernetes.io/projected/4f6bb8ae-937c-43d9-a1ce-9db3a125ac74-kube-api-access-9xlz7\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:23 crc kubenswrapper[4766]: I0129 11:48:23.918896 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6bb8ae-937c-43d9-a1ce-9db3a125ac74-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.056646 4766 generic.go:334] "Generic (PLEG): container finished" podID="4f6bb8ae-937c-43d9-a1ce-9db3a125ac74" containerID="d0693ed89baf05d69635bce26e4c5eb207eddd3f5dea55f49a459244b40305df" exitCode=0 Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.056678 4766 generic.go:334] "Generic (PLEG): container finished" podID="4f6bb8ae-937c-43d9-a1ce-9db3a125ac74" containerID="f5ce43bd560492199b757154116f634b603701a7806cb85843c5f7450af0c1be" exitCode=143 Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.057716 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.058565 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f6bb8ae-937c-43d9-a1ce-9db3a125ac74","Type":"ContainerDied","Data":"d0693ed89baf05d69635bce26e4c5eb207eddd3f5dea55f49a459244b40305df"} Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.058614 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f6bb8ae-937c-43d9-a1ce-9db3a125ac74","Type":"ContainerDied","Data":"f5ce43bd560492199b757154116f634b603701a7806cb85843c5f7450af0c1be"} Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.058629 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f6bb8ae-937c-43d9-a1ce-9db3a125ac74","Type":"ContainerDied","Data":"2e25ee0cabe9c4d3f0ab19c111587e0f70c8aea9f5e215580ba2ec3a186cd33b"} Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.058648 4766 scope.go:117] "RemoveContainer" containerID="d0693ed89baf05d69635bce26e4c5eb207eddd3f5dea55f49a459244b40305df" Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.091249 4766 scope.go:117] "RemoveContainer" containerID="f5ce43bd560492199b757154116f634b603701a7806cb85843c5f7450af0c1be" Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.093498 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.105913 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.113852 4766 scope.go:117] "RemoveContainer" containerID="d0693ed89baf05d69635bce26e4c5eb207eddd3f5dea55f49a459244b40305df" Jan 29 
11:48:24 crc kubenswrapper[4766]: E0129 11:48:24.114261 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0693ed89baf05d69635bce26e4c5eb207eddd3f5dea55f49a459244b40305df\": container with ID starting with d0693ed89baf05d69635bce26e4c5eb207eddd3f5dea55f49a459244b40305df not found: ID does not exist" containerID="d0693ed89baf05d69635bce26e4c5eb207eddd3f5dea55f49a459244b40305df" Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.114300 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0693ed89baf05d69635bce26e4c5eb207eddd3f5dea55f49a459244b40305df"} err="failed to get container status \"d0693ed89baf05d69635bce26e4c5eb207eddd3f5dea55f49a459244b40305df\": rpc error: code = NotFound desc = could not find container \"d0693ed89baf05d69635bce26e4c5eb207eddd3f5dea55f49a459244b40305df\": container with ID starting with d0693ed89baf05d69635bce26e4c5eb207eddd3f5dea55f49a459244b40305df not found: ID does not exist" Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.114326 4766 scope.go:117] "RemoveContainer" containerID="f5ce43bd560492199b757154116f634b603701a7806cb85843c5f7450af0c1be" Jan 29 11:48:24 crc kubenswrapper[4766]: E0129 11:48:24.114761 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5ce43bd560492199b757154116f634b603701a7806cb85843c5f7450af0c1be\": container with ID starting with f5ce43bd560492199b757154116f634b603701a7806cb85843c5f7450af0c1be not found: ID does not exist" containerID="f5ce43bd560492199b757154116f634b603701a7806cb85843c5f7450af0c1be" Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.114784 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5ce43bd560492199b757154116f634b603701a7806cb85843c5f7450af0c1be"} err="failed to get container status \"f5ce43bd560492199b757154116f634b603701a7806cb85843c5f7450af0c1be\": rpc error: code = NotFound desc = could not find container \"f5ce43bd560492199b757154116f634b603701a7806cb85843c5f7450af0c1be\": container with ID starting with f5ce43bd560492199b757154116f634b603701a7806cb85843c5f7450af0c1be not found: ID does not exist" Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.114800 4766 scope.go:117] "RemoveContainer" containerID="d0693ed89baf05d69635bce26e4c5eb207eddd3f5dea55f49a459244b40305df" Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.115145 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0693ed89baf05d69635bce26e4c5eb207eddd3f5dea55f49a459244b40305df"} err="failed to get container status \"d0693ed89baf05d69635bce26e4c5eb207eddd3f5dea55f49a459244b40305df\": rpc error: code = NotFound desc = could not find container \"d0693ed89baf05d69635bce26e4c5eb207eddd3f5dea55f49a459244b40305df\": container with ID starting with d0693ed89baf05d69635bce26e4c5eb207eddd3f5dea55f49a459244b40305df not found: ID does not exist" Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.115165 4766 scope.go:117] "RemoveContainer" containerID="f5ce43bd560492199b757154116f634b603701a7806cb85843c5f7450af0c1be" Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.115629 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5ce43bd560492199b757154116f634b603701a7806cb85843c5f7450af0c1be"} err="failed to get container status 
\"f5ce43bd560492199b757154116f634b603701a7806cb85843c5f7450af0c1be\": rpc error: code = NotFound desc = could not find container \"f5ce43bd560492199b757154116f634b603701a7806cb85843c5f7450af0c1be\": container with ID starting with f5ce43bd560492199b757154116f634b603701a7806cb85843c5f7450af0c1be not found: ID does not exist" Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.116582 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:48:24 crc kubenswrapper[4766]: E0129 11:48:24.117035 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f6bb8ae-937c-43d9-a1ce-9db3a125ac74" containerName="nova-metadata-log" Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.117051 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f6bb8ae-937c-43d9-a1ce-9db3a125ac74" containerName="nova-metadata-log" Jan 29 11:48:24 crc kubenswrapper[4766]: E0129 11:48:24.117079 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f6bb8ae-937c-43d9-a1ce-9db3a125ac74" containerName="nova-metadata-metadata" Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.117085 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f6bb8ae-937c-43d9-a1ce-9db3a125ac74" containerName="nova-metadata-metadata" Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.117241 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f6bb8ae-937c-43d9-a1ce-9db3a125ac74" containerName="nova-metadata-metadata" Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.117264 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f6bb8ae-937c-43d9-a1ce-9db3a125ac74" containerName="nova-metadata-log" Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.118181 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.121281 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.130970 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.135826 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.223836 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4287068b-6ec9-41df-967a-63f5119e415b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4287068b-6ec9-41df-967a-63f5119e415b\") " pod="openstack/nova-metadata-0" Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.223888 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4287068b-6ec9-41df-967a-63f5119e415b-config-data\") pod \"nova-metadata-0\" (UID: \"4287068b-6ec9-41df-967a-63f5119e415b\") " pod="openstack/nova-metadata-0" Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.223975 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4287068b-6ec9-41df-967a-63f5119e415b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4287068b-6ec9-41df-967a-63f5119e415b\") " pod="openstack/nova-metadata-0" Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.224086 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4287068b-6ec9-41df-967a-63f5119e415b-logs\") pod \"nova-metadata-0\" (UID: \"4287068b-6ec9-41df-967a-63f5119e415b\") " pod="openstack/nova-metadata-0" Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.224148 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwcsk\" (UniqueName: \"kubernetes.io/projected/4287068b-6ec9-41df-967a-63f5119e415b-kube-api-access-vwcsk\") pod \"nova-metadata-0\" (UID: \"4287068b-6ec9-41df-967a-63f5119e415b\") " pod="openstack/nova-metadata-0" Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.326206 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4287068b-6ec9-41df-967a-63f5119e415b-config-data\") pod \"nova-metadata-0\" (UID: \"4287068b-6ec9-41df-967a-63f5119e415b\") " pod="openstack/nova-metadata-0" Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.326289 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4287068b-6ec9-41df-967a-63f5119e415b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4287068b-6ec9-41df-967a-63f5119e415b\") " pod="openstack/nova-metadata-0" Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.326404 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4287068b-6ec9-41df-967a-63f5119e415b-logs\") pod \"nova-metadata-0\" (UID: \"4287068b-6ec9-41df-967a-63f5119e415b\") " pod="openstack/nova-metadata-0" Jan 29 
Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.326511 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4287068b-6ec9-41df-967a-63f5119e415b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4287068b-6ec9-41df-967a-63f5119e415b\") " pod="openstack/nova-metadata-0"
Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.328686 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4287068b-6ec9-41df-967a-63f5119e415b-logs\") pod \"nova-metadata-0\" (UID: \"4287068b-6ec9-41df-967a-63f5119e415b\") " pod="openstack/nova-metadata-0"
Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.332044 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4287068b-6ec9-41df-967a-63f5119e415b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4287068b-6ec9-41df-967a-63f5119e415b\") " pod="openstack/nova-metadata-0"
Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.343217 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4287068b-6ec9-41df-967a-63f5119e415b-config-data\") pod \"nova-metadata-0\" (UID: \"4287068b-6ec9-41df-967a-63f5119e415b\") " pod="openstack/nova-metadata-0"
Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.343250 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4287068b-6ec9-41df-967a-63f5119e415b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4287068b-6ec9-41df-967a-63f5119e415b\") " pod="openstack/nova-metadata-0"
Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.353861 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwcsk\" (UniqueName: \"kubernetes.io/projected/4287068b-6ec9-41df-967a-63f5119e415b-kube-api-access-vwcsk\") pod \"nova-metadata-0\" (UID: \"4287068b-6ec9-41df-967a-63f5119e415b\") " pod="openstack/nova-metadata-0"
Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.451134 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 29 11:48:24 crc kubenswrapper[4766]: I0129 11:48:24.891095 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 11:48:25 crc kubenswrapper[4766]: I0129 11:48:25.070803 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4287068b-6ec9-41df-967a-63f5119e415b","Type":"ContainerStarted","Data":"dc7560590adb50344b44cb35aeee089b9f9ffbe1efd952d3a22239e5f8a9b89d"}
Jan 29 11:48:25 crc kubenswrapper[4766]: I0129 11:48:25.239871 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f6bb8ae-937c-43d9-a1ce-9db3a125ac74" path="/var/lib/kubelet/pods/4f6bb8ae-937c-43d9-a1ce-9db3a125ac74/volumes"
Jan 29 11:48:26 crc kubenswrapper[4766]: I0129 11:48:26.083336 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4287068b-6ec9-41df-967a-63f5119e415b","Type":"ContainerStarted","Data":"984119a4d97798a90490cc75c0e3e66a124aa4c4c772770e1eb97c05c05b8e30"}
Jan 29 11:48:26 crc kubenswrapper[4766]: I0129 11:48:26.083687 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4287068b-6ec9-41df-967a-63f5119e415b","Type":"ContainerStarted","Data":"faa5ec491805cfa66971ca999d79c07d1dc7358a373e8c992a2b5bc64ef7ef8f"}
Jan 29 11:48:26 crc kubenswrapper[4766]: I0129 11:48:26.085062 4766 generic.go:334] "Generic (PLEG): container finished" podID="75249388-3798-4187-b09f-2e2bdfb0fd85" containerID="a705a413907ef327b1f4edf40f66d732da2bb972954ab9c5d54ac904827256ea" exitCode=0
Jan 29 11:48:26 crc kubenswrapper[4766]: I0129 11:48:26.085093 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lmqls" event={"ID":"75249388-3798-4187-b09f-2e2bdfb0fd85","Type":"ContainerDied","Data":"a705a413907ef327b1f4edf40f66d732da2bb972954ab9c5d54ac904827256ea"}
Jan 29 11:48:26 crc kubenswrapper[4766]: I0129 11:48:26.109039 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.109019577 podStartE2EDuration="2.109019577s" podCreationTimestamp="2026-01-29 11:48:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:48:26.097591206 +0000 UTC m=+1643.209984217" watchObservedRunningTime="2026-01-29 11:48:26.109019577 +0000 UTC m=+1643.221412588"
Jan 29 11:48:27 crc kubenswrapper[4766]: I0129 11:48:27.096850 4766 generic.go:334] "Generic (PLEG): container finished" podID="2f68f3b0-f008-4a27-a250-9efe5bdf5fa0" containerID="bb85acb54b614f03184470afb4304b466f98b4b04565bd127f8ffdb642c19047" exitCode=0
Jan 29 11:48:27 crc kubenswrapper[4766]: I0129 11:48:27.096964 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5zsbb" event={"ID":"2f68f3b0-f008-4a27-a250-9efe5bdf5fa0","Type":"ContainerDied","Data":"bb85acb54b614f03184470afb4304b466f98b4b04565bd127f8ffdb642c19047"}
Jan 29 11:48:27 crc kubenswrapper[4766]: I0129 11:48:27.224625 4766 scope.go:117] "RemoveContainer" containerID="0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d"
Jan 29 11:48:27 crc kubenswrapper[4766]: E0129 11:48:27.224839 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a"
Jan 29 11:48:27 crc kubenswrapper[4766]: I0129 11:48:27.278083 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 29 11:48:27 crc kubenswrapper[4766]: I0129 11:48:27.278131 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 29 11:48:27 crc kubenswrapper[4766]: I0129 11:48:27.321793 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 29 11:48:27 crc kubenswrapper[4766]: I0129 11:48:27.418412 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lmqls"
Jan 29 11:48:27 crc kubenswrapper[4766]: I0129 11:48:27.503269 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75249388-3798-4187-b09f-2e2bdfb0fd85-combined-ca-bundle\") pod \"75249388-3798-4187-b09f-2e2bdfb0fd85\" (UID: \"75249388-3798-4187-b09f-2e2bdfb0fd85\") "
Jan 29 11:48:27 crc kubenswrapper[4766]: I0129 11:48:27.503377 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75249388-3798-4187-b09f-2e2bdfb0fd85-config-data\") pod \"75249388-3798-4187-b09f-2e2bdfb0fd85\" (UID: \"75249388-3798-4187-b09f-2e2bdfb0fd85\") "
Jan 29 11:48:27 crc kubenswrapper[4766]: I0129 11:48:27.503468 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvcf8\" (UniqueName: \"kubernetes.io/projected/75249388-3798-4187-b09f-2e2bdfb0fd85-kube-api-access-tvcf8\") pod \"75249388-3798-4187-b09f-2e2bdfb0fd85\" (UID: \"75249388-3798-4187-b09f-2e2bdfb0fd85\") "
Jan 29 11:48:27 crc kubenswrapper[4766]: I0129 11:48:27.503498 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75249388-3798-4187-b09f-2e2bdfb0fd85-scripts\") pod \"75249388-3798-4187-b09f-2e2bdfb0fd85\" (UID: \"75249388-3798-4187-b09f-2e2bdfb0fd85\") "
Jan 29 11:48:27 crc kubenswrapper[4766]: I0129 11:48:27.521891 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75249388-3798-4187-b09f-2e2bdfb0fd85-kube-api-access-tvcf8" (OuterVolumeSpecName: "kube-api-access-tvcf8") pod "75249388-3798-4187-b09f-2e2bdfb0fd85" (UID: "75249388-3798-4187-b09f-2e2bdfb0fd85"). InnerVolumeSpecName "kube-api-access-tvcf8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:48:27 crc kubenswrapper[4766]: I0129 11:48:27.526060 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75249388-3798-4187-b09f-2e2bdfb0fd85-scripts" (OuterVolumeSpecName: "scripts") pod "75249388-3798-4187-b09f-2e2bdfb0fd85" (UID: "75249388-3798-4187-b09f-2e2bdfb0fd85"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:48:27 crc kubenswrapper[4766]: I0129 11:48:27.538350 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75249388-3798-4187-b09f-2e2bdfb0fd85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75249388-3798-4187-b09f-2e2bdfb0fd85" (UID: "75249388-3798-4187-b09f-2e2bdfb0fd85"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:48:27 crc kubenswrapper[4766]: I0129 11:48:27.552756 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75249388-3798-4187-b09f-2e2bdfb0fd85-config-data" (OuterVolumeSpecName: "config-data") pod "75249388-3798-4187-b09f-2e2bdfb0fd85" (UID: "75249388-3798-4187-b09f-2e2bdfb0fd85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:48:27 crc kubenswrapper[4766]: I0129 11:48:27.563027 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 11:48:27 crc kubenswrapper[4766]: I0129 11:48:27.565660 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 11:48:27 crc kubenswrapper[4766]: I0129 11:48:27.601031 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 11:48:27 crc kubenswrapper[4766]: I0129 11:48:27.605515 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvcf8\" (UniqueName: \"kubernetes.io/projected/75249388-3798-4187-b09f-2e2bdfb0fd85-kube-api-access-tvcf8\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:27 crc kubenswrapper[4766]: I0129 11:48:27.605550 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75249388-3798-4187-b09f-2e2bdfb0fd85-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:27 crc kubenswrapper[4766]: I0129 11:48:27.605564 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75249388-3798-4187-b09f-2e2bdfb0fd85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:27 crc kubenswrapper[4766]: I0129 11:48:27.605578 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75249388-3798-4187-b09f-2e2bdfb0fd85-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:27 crc kubenswrapper[4766]: I0129 11:48:27.716731 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-7t62v" Jan 29 11:48:27 crc kubenswrapper[4766]: I0129 11:48:27.782891 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-jsqcd"] Jan 29 11:48:27 crc kubenswrapper[4766]: I0129 11:48:27.783168 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-jsqcd" podUID="9037dd54-3cca-491b-9f1d-27393d6ec544" containerName="dnsmasq-dns" containerID="cri-o://15289de76d9cc3802c9a420d29af49465b6a7477dbc7383379a8dccdbe753045" gracePeriod=10 Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.116076 4766 generic.go:334] "Generic (PLEG): container finished" podID="9037dd54-3cca-491b-9f1d-27393d6ec544" containerID="15289de76d9cc3802c9a420d29af49465b6a7477dbc7383379a8dccdbe753045" exitCode=0 Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.116184 4766 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-jsqcd" event={"ID":"9037dd54-3cca-491b-9f1d-27393d6ec544","Type":"ContainerDied","Data":"15289de76d9cc3802c9a420d29af49465b6a7477dbc7383379a8dccdbe753045"} Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.126537 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lmqls" Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.126546 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lmqls" event={"ID":"75249388-3798-4187-b09f-2e2bdfb0fd85","Type":"ContainerDied","Data":"b3ffeed11b51e4afd51f385fc4108e6b8d7119be2bee80973941a12dd7f3d28e"} Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.126611 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3ffeed11b51e4afd51f385fc4108e6b8d7119be2bee80973941a12dd7f3d28e" Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.196927 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.234148 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-jsqcd" Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.293895 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.294153 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="aa981e86-43ce-42d8-973a-abc1097dd8c9" containerName="nova-api-log" containerID="cri-o://ad72971f41a6734e63b05e49cb6125884c04769a553e4cefbff9125c8b73c26d" gracePeriod=30 Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.294275 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="aa981e86-43ce-42d8-973a-abc1097dd8c9" containerName="nova-api-api" containerID="cri-o://1230dec16eaaebb2152c1349b3b659606b2320b714954b3c77bb1dbbcb7f86ba" gracePeriod=30 Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.306517 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aa981e86-43ce-42d8-973a-abc1097dd8c9" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.179:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.306854 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aa981e86-43ce-42d8-973a-abc1097dd8c9" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.179:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.350658 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.350860 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4287068b-6ec9-41df-967a-63f5119e415b" containerName="nova-metadata-log" containerID="cri-o://faa5ec491805cfa66971ca999d79c07d1dc7358a373e8c992a2b5bc64ef7ef8f" gracePeriod=30 Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.351268 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" 
podUID="4287068b-6ec9-41df-967a-63f5119e415b" containerName="nova-metadata-metadata" containerID="cri-o://984119a4d97798a90490cc75c0e3e66a124aa4c4c772770e1eb97c05c05b8e30" gracePeriod=30 Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.436870 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-ovsdbserver-sb\") pod \"9037dd54-3cca-491b-9f1d-27393d6ec544\" (UID: \"9037dd54-3cca-491b-9f1d-27393d6ec544\") " Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.436933 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-dns-svc\") pod \"9037dd54-3cca-491b-9f1d-27393d6ec544\" (UID: \"9037dd54-3cca-491b-9f1d-27393d6ec544\") " Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.437107 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-config\") pod \"9037dd54-3cca-491b-9f1d-27393d6ec544\" (UID: \"9037dd54-3cca-491b-9f1d-27393d6ec544\") " Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.437170 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw62d\" (UniqueName: \"kubernetes.io/projected/9037dd54-3cca-491b-9f1d-27393d6ec544-kube-api-access-qw62d\") pod \"9037dd54-3cca-491b-9f1d-27393d6ec544\" (UID: \"9037dd54-3cca-491b-9f1d-27393d6ec544\") " Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.437211 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-dns-swift-storage-0\") pod \"9037dd54-3cca-491b-9f1d-27393d6ec544\" (UID: \"9037dd54-3cca-491b-9f1d-27393d6ec544\") " Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.437245 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-ovsdbserver-nb\") pod \"9037dd54-3cca-491b-9f1d-27393d6ec544\" (UID: \"9037dd54-3cca-491b-9f1d-27393d6ec544\") " Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.455747 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9037dd54-3cca-491b-9f1d-27393d6ec544-kube-api-access-qw62d" (OuterVolumeSpecName: "kube-api-access-qw62d") pod "9037dd54-3cca-491b-9f1d-27393d6ec544" (UID: "9037dd54-3cca-491b-9f1d-27393d6ec544"). InnerVolumeSpecName "kube-api-access-qw62d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.532753 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-config" (OuterVolumeSpecName: "config") pod "9037dd54-3cca-491b-9f1d-27393d6ec544" (UID: "9037dd54-3cca-491b-9f1d-27393d6ec544"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.532910 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9037dd54-3cca-491b-9f1d-27393d6ec544" (UID: "9037dd54-3cca-491b-9f1d-27393d6ec544"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.534119 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9037dd54-3cca-491b-9f1d-27393d6ec544" (UID: "9037dd54-3cca-491b-9f1d-27393d6ec544"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.541718 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qw62d\" (UniqueName: \"kubernetes.io/projected/9037dd54-3cca-491b-9f1d-27393d6ec544-kube-api-access-qw62d\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.541749 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.541805 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.541817 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.549040 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9037dd54-3cca-491b-9f1d-27393d6ec544" (UID: "9037dd54-3cca-491b-9f1d-27393d6ec544"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.557850 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9037dd54-3cca-491b-9f1d-27393d6ec544" (UID: "9037dd54-3cca-491b-9f1d-27393d6ec544"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.643468 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.643508 4766 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9037dd54-3cca-491b-9f1d-27393d6ec544-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.674046 4766 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.744147 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f68f3b0-f008-4a27-a250-9efe5bdf5fa0-config-data\") pod \"2f68f3b0-f008-4a27-a250-9efe5bdf5fa0\" (UID: \"2f68f3b0-f008-4a27-a250-9efe5bdf5fa0\") "
Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.744216 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f68f3b0-f008-4a27-a250-9efe5bdf5fa0-combined-ca-bundle\") pod \"2f68f3b0-f008-4a27-a250-9efe5bdf5fa0\" (UID: \"2f68f3b0-f008-4a27-a250-9efe5bdf5fa0\") "
Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.744295 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qh8bg\" (UniqueName: \"kubernetes.io/projected/2f68f3b0-f008-4a27-a250-9efe5bdf5fa0-kube-api-access-qh8bg\") pod \"2f68f3b0-f008-4a27-a250-9efe5bdf5fa0\" (UID: \"2f68f3b0-f008-4a27-a250-9efe5bdf5fa0\") "
Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.744529 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f68f3b0-f008-4a27-a250-9efe5bdf5fa0-scripts\") pod \"2f68f3b0-f008-4a27-a250-9efe5bdf5fa0\" (UID: \"2f68f3b0-f008-4a27-a250-9efe5bdf5fa0\") "
Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.757401 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f68f3b0-f008-4a27-a250-9efe5bdf5fa0-kube-api-access-qh8bg" (OuterVolumeSpecName: "kube-api-access-qh8bg") pod "2f68f3b0-f008-4a27-a250-9efe5bdf5fa0" (UID: "2f68f3b0-f008-4a27-a250-9efe5bdf5fa0"). InnerVolumeSpecName "kube-api-access-qh8bg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.761679 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f68f3b0-f008-4a27-a250-9efe5bdf5fa0-scripts" (OuterVolumeSpecName: "scripts") pod "2f68f3b0-f008-4a27-a250-9efe5bdf5fa0" (UID: "2f68f3b0-f008-4a27-a250-9efe5bdf5fa0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.777604 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.785126 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f68f3b0-f008-4a27-a250-9efe5bdf5fa0-config-data" (OuterVolumeSpecName: "config-data") pod "2f68f3b0-f008-4a27-a250-9efe5bdf5fa0" (UID: "2f68f3b0-f008-4a27-a250-9efe5bdf5fa0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.807627 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f68f3b0-f008-4a27-a250-9efe5bdf5fa0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f68f3b0-f008-4a27-a250-9efe5bdf5fa0" (UID: "2f68f3b0-f008-4a27-a250-9efe5bdf5fa0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.847051 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f68f3b0-f008-4a27-a250-9efe5bdf5fa0-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.847098 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f68f3b0-f008-4a27-a250-9efe5bdf5fa0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.847111 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qh8bg\" (UniqueName: \"kubernetes.io/projected/2f68f3b0-f008-4a27-a250-9efe5bdf5fa0-kube-api-access-qh8bg\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.847124 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f68f3b0-f008-4a27-a250-9efe5bdf5fa0-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:28 crc kubenswrapper[4766]: E0129 11:48:28.864578 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4287068b_6ec9_41df_967a_63f5119e415b.slice/crio-conmon-984119a4d97798a90490cc75c0e3e66a124aa4c4c772770e1eb97c05c05b8e30.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4287068b_6ec9_41df_967a_63f5119e415b.slice/crio-984119a4d97798a90490cc75c0e3e66a124aa4c4c772770e1eb97c05c05b8e30.scope\": RecentStats: unable to find data in memory cache]" Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.906105 4766 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.948821 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4287068b-6ec9-41df-967a-63f5119e415b-combined-ca-bundle\") pod \"4287068b-6ec9-41df-967a-63f5119e415b\" (UID: \"4287068b-6ec9-41df-967a-63f5119e415b\") "
Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.949215 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4287068b-6ec9-41df-967a-63f5119e415b-config-data\") pod \"4287068b-6ec9-41df-967a-63f5119e415b\" (UID: \"4287068b-6ec9-41df-967a-63f5119e415b\") "
Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.949420 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4287068b-6ec9-41df-967a-63f5119e415b-nova-metadata-tls-certs\") pod \"4287068b-6ec9-41df-967a-63f5119e415b\" (UID: \"4287068b-6ec9-41df-967a-63f5119e415b\") "
Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.949632 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwcsk\" (UniqueName: \"kubernetes.io/projected/4287068b-6ec9-41df-967a-63f5119e415b-kube-api-access-vwcsk\") pod \"4287068b-6ec9-41df-967a-63f5119e415b\" (UID: \"4287068b-6ec9-41df-967a-63f5119e415b\") "
Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.949730 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4287068b-6ec9-41df-967a-63f5119e415b-logs\") pod \"4287068b-6ec9-41df-967a-63f5119e415b\" (UID: \"4287068b-6ec9-41df-967a-63f5119e415b\") "
Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.950144 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4287068b-6ec9-41df-967a-63f5119e415b-logs" (OuterVolumeSpecName: "logs") pod "4287068b-6ec9-41df-967a-63f5119e415b" (UID: "4287068b-6ec9-41df-967a-63f5119e415b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.950535 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4287068b-6ec9-41df-967a-63f5119e415b-logs\") on node \"crc\" DevicePath \"\""
Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.956263 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4287068b-6ec9-41df-967a-63f5119e415b-kube-api-access-vwcsk" (OuterVolumeSpecName: "kube-api-access-vwcsk") pod "4287068b-6ec9-41df-967a-63f5119e415b" (UID: "4287068b-6ec9-41df-967a-63f5119e415b"). InnerVolumeSpecName "kube-api-access-vwcsk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.982719 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4287068b-6ec9-41df-967a-63f5119e415b-config-data" (OuterVolumeSpecName: "config-data") pod "4287068b-6ec9-41df-967a-63f5119e415b" (UID: "4287068b-6ec9-41df-967a-63f5119e415b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.989625 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4287068b-6ec9-41df-967a-63f5119e415b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4287068b-6ec9-41df-967a-63f5119e415b" (UID: "4287068b-6ec9-41df-967a-63f5119e415b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:48:28 crc kubenswrapper[4766]: I0129 11:48:28.997303 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4287068b-6ec9-41df-967a-63f5119e415b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4287068b-6ec9-41df-967a-63f5119e415b" (UID: "4287068b-6ec9-41df-967a-63f5119e415b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.052725 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwcsk\" (UniqueName: \"kubernetes.io/projected/4287068b-6ec9-41df-967a-63f5119e415b-kube-api-access-vwcsk\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.052772 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4287068b-6ec9-41df-967a-63f5119e415b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.052783 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4287068b-6ec9-41df-967a-63f5119e415b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.052796 4766 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4287068b-6ec9-41df-967a-63f5119e415b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.180519 4766 generic.go:334] "Generic (PLEG): container finished" podID="aa981e86-43ce-42d8-973a-abc1097dd8c9" containerID="ad72971f41a6734e63b05e49cb6125884c04769a553e4cefbff9125c8b73c26d" exitCode=143 Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.180621 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aa981e86-43ce-42d8-973a-abc1097dd8c9","Type":"ContainerDied","Data":"ad72971f41a6734e63b05e49cb6125884c04769a553e4cefbff9125c8b73c26d"} Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.208201 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-jsqcd" event={"ID":"9037dd54-3cca-491b-9f1d-27393d6ec544","Type":"ContainerDied","Data":"e5bdd1a5cbdc60529c9edf4762523068a5b2f3d68cab4cb2e6c4146c98f2554e"} Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.208526 4766 scope.go:117] "RemoveContainer" containerID="15289de76d9cc3802c9a420d29af49465b6a7477dbc7383379a8dccdbe753045" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.208790 4766 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.235531 4766 generic.go:334] "Generic (PLEG): container finished" podID="4287068b-6ec9-41df-967a-63f5119e415b" containerID="984119a4d97798a90490cc75c0e3e66a124aa4c4c772770e1eb97c05c05b8e30" exitCode=0
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.235565 4766 generic.go:334] "Generic (PLEG): container finished" podID="4287068b-6ec9-41df-967a-63f5119e415b" containerID="faa5ec491805cfa66971ca999d79c07d1dc7358a373e8c992a2b5bc64ef7ef8f" exitCode=143
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.235657 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.244471 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5zsbb"
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.293080 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4287068b-6ec9-41df-967a-63f5119e415b","Type":"ContainerDied","Data":"984119a4d97798a90490cc75c0e3e66a124aa4c4c772770e1eb97c05c05b8e30"}
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.293124 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 29 11:48:29 crc kubenswrapper[4766]: E0129 11:48:29.293411 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4287068b-6ec9-41df-967a-63f5119e415b" containerName="nova-metadata-log"
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.293447 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4287068b-6ec9-41df-967a-63f5119e415b" containerName="nova-metadata-log"
Jan 29 11:48:29 crc kubenswrapper[4766]: E0129 11:48:29.293476 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4287068b-6ec9-41df-967a-63f5119e415b" containerName="nova-metadata-metadata"
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.293482 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4287068b-6ec9-41df-967a-63f5119e415b" containerName="nova-metadata-metadata"
Jan 29 11:48:29 crc kubenswrapper[4766]: E0129 11:48:29.293493 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9037dd54-3cca-491b-9f1d-27393d6ec544" containerName="dnsmasq-dns"
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.293501 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9037dd54-3cca-491b-9f1d-27393d6ec544" containerName="dnsmasq-dns"
Jan 29 11:48:29 crc kubenswrapper[4766]: E0129 11:48:29.293518 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f68f3b0-f008-4a27-a250-9efe5bdf5fa0" containerName="nova-cell1-conductor-db-sync"
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.293525 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f68f3b0-f008-4a27-a250-9efe5bdf5fa0" containerName="nova-cell1-conductor-db-sync"
Jan 29 11:48:29 crc kubenswrapper[4766]: E0129 11:48:29.293561 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9037dd54-3cca-491b-9f1d-27393d6ec544" containerName="init"
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.293570 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9037dd54-3cca-491b-9f1d-27393d6ec544" containerName="init"
Jan 29 11:48:29 crc kubenswrapper[4766]: E0129 11:48:29.293585 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75249388-3798-4187-b09f-2e2bdfb0fd85" containerName="nova-manage"
podUID="75249388-3798-4187-b09f-2e2bdfb0fd85" containerName="nova-manage" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.293591 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="75249388-3798-4187-b09f-2e2bdfb0fd85" containerName="nova-manage" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.293775 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4287068b-6ec9-41df-967a-63f5119e415b" containerName="nova-metadata-log" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.293786 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4287068b-6ec9-41df-967a-63f5119e415b" containerName="nova-metadata-metadata" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.293840 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f68f3b0-f008-4a27-a250-9efe5bdf5fa0" containerName="nova-cell1-conductor-db-sync" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.293860 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9037dd54-3cca-491b-9f1d-27393d6ec544" containerName="dnsmasq-dns" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.293909 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="75249388-3798-4187-b09f-2e2bdfb0fd85" containerName="nova-manage" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.294492 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4287068b-6ec9-41df-967a-63f5119e415b","Type":"ContainerDied","Data":"faa5ec491805cfa66971ca999d79c07d1dc7358a373e8c992a2b5bc64ef7ef8f"} Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.294519 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4287068b-6ec9-41df-967a-63f5119e415b","Type":"ContainerDied","Data":"dc7560590adb50344b44cb35aeee089b9f9ffbe1efd952d3a22239e5f8a9b89d"} Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.294531 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5zsbb" event={"ID":"2f68f3b0-f008-4a27-a250-9efe5bdf5fa0","Type":"ContainerDied","Data":"683aa9116e2e5bb6259c544f9465613c432fa21251db835d6f546d33303bb33c"} Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.294542 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="683aa9116e2e5bb6259c544f9465613c432fa21251db835d6f546d33303bb33c" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.294614 4766 util.go:30] "No sandbox for pod can be found. 
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.296578 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.299365 4766 scope.go:117] "RemoveContainer" containerID="be0ad3836c6733b6b9f926a905818bde7dd60a66fcafdbaf73d9608859ca9817"
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.301601 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.343303 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-jsqcd"]
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.359988 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbn4j\" (UniqueName: \"kubernetes.io/projected/6c6eae2b-18a8-4a82-95e2-4940490b1678-kube-api-access-zbn4j\") pod \"nova-cell1-conductor-0\" (UID: \"6c6eae2b-18a8-4a82-95e2-4940490b1678\") " pod="openstack/nova-cell1-conductor-0"
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.360067 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c6eae2b-18a8-4a82-95e2-4940490b1678-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6c6eae2b-18a8-4a82-95e2-4940490b1678\") " pod="openstack/nova-cell1-conductor-0"
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.360263 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6eae2b-18a8-4a82-95e2-4940490b1678-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6c6eae2b-18a8-4a82-95e2-4940490b1678\") " pod="openstack/nova-cell1-conductor-0"
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.371001 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-jsqcd"]
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.382174 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.389587 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.398915 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.400343 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.403538 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.403790 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.422973 4766 scope.go:117] "RemoveContainer" containerID="984119a4d97798a90490cc75c0e3e66a124aa4c4c772770e1eb97c05c05b8e30"
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.432126 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.460336 4766 scope.go:117] "RemoveContainer" containerID="faa5ec491805cfa66971ca999d79c07d1dc7358a373e8c992a2b5bc64ef7ef8f"
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.461513 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6eae2b-18a8-4a82-95e2-4940490b1678-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6c6eae2b-18a8-4a82-95e2-4940490b1678\") " pod="openstack/nova-cell1-conductor-0"
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.461675 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1764d6-be26-4597-b4bf-141727790edf-config-data\") pod \"nova-metadata-0\" (UID: \"2d1764d6-be26-4597-b4bf-141727790edf\") " pod="openstack/nova-metadata-0"
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.461813 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d1764d6-be26-4597-b4bf-141727790edf-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2d1764d6-be26-4597-b4bf-141727790edf\") " pod="openstack/nova-metadata-0"
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.462043 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk57p\" (UniqueName: \"kubernetes.io/projected/2d1764d6-be26-4597-b4bf-141727790edf-kube-api-access-pk57p\") pod \"nova-metadata-0\" (UID: \"2d1764d6-be26-4597-b4bf-141727790edf\") " pod="openstack/nova-metadata-0"
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.462154 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbn4j\" (UniqueName: \"kubernetes.io/projected/6c6eae2b-18a8-4a82-95e2-4940490b1678-kube-api-access-zbn4j\") pod \"nova-cell1-conductor-0\" (UID: \"6c6eae2b-18a8-4a82-95e2-4940490b1678\") " pod="openstack/nova-cell1-conductor-0"
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.462261 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d1764d6-be26-4597-b4bf-141727790edf-logs\") pod \"nova-metadata-0\" (UID: \"2d1764d6-be26-4597-b4bf-141727790edf\") " pod="openstack/nova-metadata-0"
Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.462357 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c6eae2b-18a8-4a82-95e2-4940490b1678-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6c6eae2b-18a8-4a82-95e2-4940490b1678\") " pod="openstack/nova-cell1-conductor-0"
pod="openstack/nova-cell1-conductor-0" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.462566 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1764d6-be26-4597-b4bf-141727790edf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2d1764d6-be26-4597-b4bf-141727790edf\") " pod="openstack/nova-metadata-0" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.465323 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6eae2b-18a8-4a82-95e2-4940490b1678-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6c6eae2b-18a8-4a82-95e2-4940490b1678\") " pod="openstack/nova-cell1-conductor-0" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.467248 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c6eae2b-18a8-4a82-95e2-4940490b1678-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6c6eae2b-18a8-4a82-95e2-4940490b1678\") " pod="openstack/nova-cell1-conductor-0" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.478995 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbn4j\" (UniqueName: \"kubernetes.io/projected/6c6eae2b-18a8-4a82-95e2-4940490b1678-kube-api-access-zbn4j\") pod \"nova-cell1-conductor-0\" (UID: \"6c6eae2b-18a8-4a82-95e2-4940490b1678\") " pod="openstack/nova-cell1-conductor-0" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.558256 4766 scope.go:117] "RemoveContainer" containerID="984119a4d97798a90490cc75c0e3e66a124aa4c4c772770e1eb97c05c05b8e30" Jan 29 11:48:29 crc kubenswrapper[4766]: E0129 11:48:29.559016 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"984119a4d97798a90490cc75c0e3e66a124aa4c4c772770e1eb97c05c05b8e30\": container with ID starting with 984119a4d97798a90490cc75c0e3e66a124aa4c4c772770e1eb97c05c05b8e30 not found: ID does not exist" containerID="984119a4d97798a90490cc75c0e3e66a124aa4c4c772770e1eb97c05c05b8e30" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.559073 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"984119a4d97798a90490cc75c0e3e66a124aa4c4c772770e1eb97c05c05b8e30"} err="failed to get container status \"984119a4d97798a90490cc75c0e3e66a124aa4c4c772770e1eb97c05c05b8e30\": rpc error: code = NotFound desc = could not find container \"984119a4d97798a90490cc75c0e3e66a124aa4c4c772770e1eb97c05c05b8e30\": container with ID starting with 984119a4d97798a90490cc75c0e3e66a124aa4c4c772770e1eb97c05c05b8e30 not found: ID does not exist" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.559381 4766 scope.go:117] "RemoveContainer" containerID="faa5ec491805cfa66971ca999d79c07d1dc7358a373e8c992a2b5bc64ef7ef8f" Jan 29 11:48:29 crc kubenswrapper[4766]: E0129 11:48:29.559866 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faa5ec491805cfa66971ca999d79c07d1dc7358a373e8c992a2b5bc64ef7ef8f\": container with ID starting with faa5ec491805cfa66971ca999d79c07d1dc7358a373e8c992a2b5bc64ef7ef8f not found: ID does not exist" containerID="faa5ec491805cfa66971ca999d79c07d1dc7358a373e8c992a2b5bc64ef7ef8f" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.559915 4766 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"faa5ec491805cfa66971ca999d79c07d1dc7358a373e8c992a2b5bc64ef7ef8f"} err="failed to get container status \"faa5ec491805cfa66971ca999d79c07d1dc7358a373e8c992a2b5bc64ef7ef8f\": rpc error: code = NotFound desc = could not find container \"faa5ec491805cfa66971ca999d79c07d1dc7358a373e8c992a2b5bc64ef7ef8f\": container with ID starting with faa5ec491805cfa66971ca999d79c07d1dc7358a373e8c992a2b5bc64ef7ef8f not found: ID does not exist" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.559945 4766 scope.go:117] "RemoveContainer" containerID="984119a4d97798a90490cc75c0e3e66a124aa4c4c772770e1eb97c05c05b8e30" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.560584 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"984119a4d97798a90490cc75c0e3e66a124aa4c4c772770e1eb97c05c05b8e30"} err="failed to get container status \"984119a4d97798a90490cc75c0e3e66a124aa4c4c772770e1eb97c05c05b8e30\": rpc error: code = NotFound desc = could not find container \"984119a4d97798a90490cc75c0e3e66a124aa4c4c772770e1eb97c05c05b8e30\": container with ID starting with 984119a4d97798a90490cc75c0e3e66a124aa4c4c772770e1eb97c05c05b8e30 not found: ID does not exist" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.560693 4766 scope.go:117] "RemoveContainer" containerID="faa5ec491805cfa66971ca999d79c07d1dc7358a373e8c992a2b5bc64ef7ef8f" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.561029 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faa5ec491805cfa66971ca999d79c07d1dc7358a373e8c992a2b5bc64ef7ef8f"} err="failed to get container status \"faa5ec491805cfa66971ca999d79c07d1dc7358a373e8c992a2b5bc64ef7ef8f\": rpc error: code = NotFound desc = could not find container \"faa5ec491805cfa66971ca999d79c07d1dc7358a373e8c992a2b5bc64ef7ef8f\": container with ID starting with faa5ec491805cfa66971ca999d79c07d1dc7358a373e8c992a2b5bc64ef7ef8f not found: ID does not exist" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.564223 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk57p\" (UniqueName: \"kubernetes.io/projected/2d1764d6-be26-4597-b4bf-141727790edf-kube-api-access-pk57p\") pod \"nova-metadata-0\" (UID: \"2d1764d6-be26-4597-b4bf-141727790edf\") " pod="openstack/nova-metadata-0" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.564287 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d1764d6-be26-4597-b4bf-141727790edf-logs\") pod \"nova-metadata-0\" (UID: \"2d1764d6-be26-4597-b4bf-141727790edf\") " pod="openstack/nova-metadata-0" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.564372 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1764d6-be26-4597-b4bf-141727790edf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2d1764d6-be26-4597-b4bf-141727790edf\") " pod="openstack/nova-metadata-0" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.564427 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1764d6-be26-4597-b4bf-141727790edf-config-data\") pod \"nova-metadata-0\" (UID: \"2d1764d6-be26-4597-b4bf-141727790edf\") " pod="openstack/nova-metadata-0" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.564458 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d1764d6-be26-4597-b4bf-141727790edf-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2d1764d6-be26-4597-b4bf-141727790edf\") " pod="openstack/nova-metadata-0" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.565369 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d1764d6-be26-4597-b4bf-141727790edf-logs\") pod \"nova-metadata-0\" (UID: \"2d1764d6-be26-4597-b4bf-141727790edf\") " pod="openstack/nova-metadata-0" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.574122 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1764d6-be26-4597-b4bf-141727790edf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2d1764d6-be26-4597-b4bf-141727790edf\") " pod="openstack/nova-metadata-0" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.574443 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1764d6-be26-4597-b4bf-141727790edf-config-data\") pod \"nova-metadata-0\" (UID: \"2d1764d6-be26-4597-b4bf-141727790edf\") " pod="openstack/nova-metadata-0" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.578633 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d1764d6-be26-4597-b4bf-141727790edf-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2d1764d6-be26-4597-b4bf-141727790edf\") " pod="openstack/nova-metadata-0" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.582967 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk57p\" (UniqueName: \"kubernetes.io/projected/2d1764d6-be26-4597-b4bf-141727790edf-kube-api-access-pk57p\") pod \"nova-metadata-0\" (UID: \"2d1764d6-be26-4597-b4bf-141727790edf\") " pod="openstack/nova-metadata-0" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.696800 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 11:48:29 crc kubenswrapper[4766]: I0129 11:48:29.755613 4766 util.go:30] "No sandbox for pod can be found. 
Jan 29 11:48:30 crc kubenswrapper[4766]: I0129 11:48:30.232977 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 29 11:48:30 crc kubenswrapper[4766]: I0129 11:48:30.260243 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="2f103e36-e206-4730-b776-9d9f0bc9b264" containerName="nova-scheduler-scheduler" containerID="cri-o://2aa8e942a3e1dfe2d25f4e43f92cb6937d91c09b030cf4be087491b37197791c" gracePeriod=30
Jan 29 11:48:30 crc kubenswrapper[4766]: I0129 11:48:30.260509 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6c6eae2b-18a8-4a82-95e2-4940490b1678","Type":"ContainerStarted","Data":"1a08eda308e5b86589b10f48609b53f6b94314846217f23b59d88ddacddf3fc1"}
Jan 29 11:48:30 crc kubenswrapper[4766]: I0129 11:48:30.301156 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 11:48:31 crc kubenswrapper[4766]: I0129 11:48:31.234680 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4287068b-6ec9-41df-967a-63f5119e415b" path="/var/lib/kubelet/pods/4287068b-6ec9-41df-967a-63f5119e415b/volumes"
Jan 29 11:48:31 crc kubenswrapper[4766]: I0129 11:48:31.235530 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9037dd54-3cca-491b-9f1d-27393d6ec544" path="/var/lib/kubelet/pods/9037dd54-3cca-491b-9f1d-27393d6ec544/volumes"
Jan 29 11:48:31 crc kubenswrapper[4766]: I0129 11:48:31.273051 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2d1764d6-be26-4597-b4bf-141727790edf","Type":"ContainerStarted","Data":"0a7137c744a679565a05d7f69d9ae8043767a6de6370f4ddad9abd8488b3fcb4"}
Jan 29 11:48:31 crc kubenswrapper[4766]: I0129 11:48:31.273093 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2d1764d6-be26-4597-b4bf-141727790edf","Type":"ContainerStarted","Data":"13fb44589cf74d5e3ad8944c0da984fde5889b5fe0e2e188eafbf5713410f1bd"}
Jan 29 11:48:31 crc kubenswrapper[4766]: I0129 11:48:31.273105 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2d1764d6-be26-4597-b4bf-141727790edf","Type":"ContainerStarted","Data":"5418adef69569ce39d71851d2916e5a2da795286f448771809f6ef6537fc28d7"}
Jan 29 11:48:31 crc kubenswrapper[4766]: I0129 11:48:31.275856 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6c6eae2b-18a8-4a82-95e2-4940490b1678","Type":"ContainerStarted","Data":"004bd341daa79bad3d15e54cc1bb127c54401c3d66802d245e39b218f040695f"}
Jan 29 11:48:31 crc kubenswrapper[4766]: I0129 11:48:31.276591 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Jan 29 11:48:31 crc kubenswrapper[4766]: I0129 11:48:31.302351 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.30232663 podStartE2EDuration="2.30232663s" podCreationTimestamp="2026-01-29 11:48:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:48:31.293165189 +0000 UTC m=+1648.405558250" watchObservedRunningTime="2026-01-29 11:48:31.30232663 +0000 UTC m=+1648.414719651"
Jan 29 11:48:31 crc kubenswrapper[4766]: I0129 11:48:31.319699 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.319678467 podStartE2EDuration="2.319678467s" podCreationTimestamp="2026-01-29 11:48:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:48:31.316206846 +0000 UTC m=+1648.428599877" watchObservedRunningTime="2026-01-29 11:48:31.319678467 +0000 UTC m=+1648.432071468"
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.319678467 podStartE2EDuration="2.319678467s" podCreationTimestamp="2026-01-29 11:48:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:48:31.316206846 +0000 UTC m=+1648.428599877" watchObservedRunningTime="2026-01-29 11:48:31.319678467 +0000 UTC m=+1648.432071468" Jan 29 11:48:32 crc kubenswrapper[4766]: E0129 11:48:32.564653 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2aa8e942a3e1dfe2d25f4e43f92cb6937d91c09b030cf4be087491b37197791c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 11:48:32 crc kubenswrapper[4766]: E0129 11:48:32.566426 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2aa8e942a3e1dfe2d25f4e43f92cb6937d91c09b030cf4be087491b37197791c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 11:48:32 crc kubenswrapper[4766]: E0129 11:48:32.568818 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2aa8e942a3e1dfe2d25f4e43f92cb6937d91c09b030cf4be087491b37197791c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 11:48:32 crc kubenswrapper[4766]: E0129 11:48:32.568873 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="2f103e36-e206-4730-b776-9d9f0bc9b264" containerName="nova-scheduler-scheduler" Jan 29 11:48:33 crc kubenswrapper[4766]: I0129 11:48:33.872375 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:48:33 crc kubenswrapper[4766]: I0129 11:48:33.955470 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f103e36-e206-4730-b776-9d9f0bc9b264-combined-ca-bundle\") pod \"2f103e36-e206-4730-b776-9d9f0bc9b264\" (UID: \"2f103e36-e206-4730-b776-9d9f0bc9b264\") " Jan 29 11:48:33 crc kubenswrapper[4766]: I0129 11:48:33.955565 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5vs8\" (UniqueName: \"kubernetes.io/projected/2f103e36-e206-4730-b776-9d9f0bc9b264-kube-api-access-r5vs8\") pod \"2f103e36-e206-4730-b776-9d9f0bc9b264\" (UID: \"2f103e36-e206-4730-b776-9d9f0bc9b264\") " Jan 29 11:48:33 crc kubenswrapper[4766]: I0129 11:48:33.955647 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f103e36-e206-4730-b776-9d9f0bc9b264-config-data\") pod \"2f103e36-e206-4730-b776-9d9f0bc9b264\" (UID: \"2f103e36-e206-4730-b776-9d9f0bc9b264\") " Jan 29 11:48:33 crc kubenswrapper[4766]: I0129 11:48:33.961161 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f103e36-e206-4730-b776-9d9f0bc9b264-kube-api-access-r5vs8" (OuterVolumeSpecName: "kube-api-access-r5vs8") pod "2f103e36-e206-4730-b776-9d9f0bc9b264" (UID: "2f103e36-e206-4730-b776-9d9f0bc9b264"). InnerVolumeSpecName "kube-api-access-r5vs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:48:33 crc kubenswrapper[4766]: I0129 11:48:33.986244 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f103e36-e206-4730-b776-9d9f0bc9b264-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f103e36-e206-4730-b776-9d9f0bc9b264" (UID: "2f103e36-e206-4730-b776-9d9f0bc9b264"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:48:33 crc kubenswrapper[4766]: I0129 11:48:33.989774 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f103e36-e206-4730-b776-9d9f0bc9b264-config-data" (OuterVolumeSpecName: "config-data") pod "2f103e36-e206-4730-b776-9d9f0bc9b264" (UID: "2f103e36-e206-4730-b776-9d9f0bc9b264"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.058108 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f103e36-e206-4730-b776-9d9f0bc9b264-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.058156 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5vs8\" (UniqueName: \"kubernetes.io/projected/2f103e36-e206-4730-b776-9d9f0bc9b264-kube-api-access-r5vs8\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.058174 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f103e36-e206-4730-b776-9d9f0bc9b264-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.068970 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.159224 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85l94\" (UniqueName: \"kubernetes.io/projected/aa981e86-43ce-42d8-973a-abc1097dd8c9-kube-api-access-85l94\") pod \"aa981e86-43ce-42d8-973a-abc1097dd8c9\" (UID: \"aa981e86-43ce-42d8-973a-abc1097dd8c9\") " Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.159396 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa981e86-43ce-42d8-973a-abc1097dd8c9-combined-ca-bundle\") pod \"aa981e86-43ce-42d8-973a-abc1097dd8c9\" (UID: \"aa981e86-43ce-42d8-973a-abc1097dd8c9\") " Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.159443 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa981e86-43ce-42d8-973a-abc1097dd8c9-config-data\") pod \"aa981e86-43ce-42d8-973a-abc1097dd8c9\" (UID: \"aa981e86-43ce-42d8-973a-abc1097dd8c9\") " Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.159477 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa981e86-43ce-42d8-973a-abc1097dd8c9-logs\") pod \"aa981e86-43ce-42d8-973a-abc1097dd8c9\" (UID: \"aa981e86-43ce-42d8-973a-abc1097dd8c9\") " Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.160170 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa981e86-43ce-42d8-973a-abc1097dd8c9-logs" (OuterVolumeSpecName: "logs") pod "aa981e86-43ce-42d8-973a-abc1097dd8c9" (UID: "aa981e86-43ce-42d8-973a-abc1097dd8c9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.162127 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa981e86-43ce-42d8-973a-abc1097dd8c9-kube-api-access-85l94" (OuterVolumeSpecName: "kube-api-access-85l94") pod "aa981e86-43ce-42d8-973a-abc1097dd8c9" (UID: "aa981e86-43ce-42d8-973a-abc1097dd8c9"). InnerVolumeSpecName "kube-api-access-85l94". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.181549 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa981e86-43ce-42d8-973a-abc1097dd8c9-config-data" (OuterVolumeSpecName: "config-data") pod "aa981e86-43ce-42d8-973a-abc1097dd8c9" (UID: "aa981e86-43ce-42d8-973a-abc1097dd8c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.185439 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa981e86-43ce-42d8-973a-abc1097dd8c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa981e86-43ce-42d8-973a-abc1097dd8c9" (UID: "aa981e86-43ce-42d8-973a-abc1097dd8c9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.262360 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa981e86-43ce-42d8-973a-abc1097dd8c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.262445 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa981e86-43ce-42d8-973a-abc1097dd8c9-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.262459 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa981e86-43ce-42d8-973a-abc1097dd8c9-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.262519 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85l94\" (UniqueName: \"kubernetes.io/projected/aa981e86-43ce-42d8-973a-abc1097dd8c9-kube-api-access-85l94\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.316820 4766 generic.go:334] "Generic (PLEG): container finished" podID="aa981e86-43ce-42d8-973a-abc1097dd8c9" containerID="1230dec16eaaebb2152c1349b3b659606b2320b714954b3c77bb1dbbcb7f86ba" exitCode=0 Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.316909 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.316933 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aa981e86-43ce-42d8-973a-abc1097dd8c9","Type":"ContainerDied","Data":"1230dec16eaaebb2152c1349b3b659606b2320b714954b3c77bb1dbbcb7f86ba"} Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.317001 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aa981e86-43ce-42d8-973a-abc1097dd8c9","Type":"ContainerDied","Data":"684e1c44c276befaf71e1913f291874864293d79d54f0d8a9171bef04d144da9"} Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.317020 4766 scope.go:117] "RemoveContainer" containerID="1230dec16eaaebb2152c1349b3b659606b2320b714954b3c77bb1dbbcb7f86ba" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.324286 4766 generic.go:334] "Generic (PLEG): container finished" podID="2f103e36-e206-4730-b776-9d9f0bc9b264" containerID="2aa8e942a3e1dfe2d25f4e43f92cb6937d91c09b030cf4be087491b37197791c" exitCode=0 Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.324315 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.324332 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2f103e36-e206-4730-b776-9d9f0bc9b264","Type":"ContainerDied","Data":"2aa8e942a3e1dfe2d25f4e43f92cb6937d91c09b030cf4be087491b37197791c"} Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.324456 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2f103e36-e206-4730-b776-9d9f0bc9b264","Type":"ContainerDied","Data":"bbcfda01f63a3d3640585234256fa765baf7df10d27fa5d01818d6b79eda4554"} Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.354866 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.363128 4766 scope.go:117] "RemoveContainer" containerID="ad72971f41a6734e63b05e49cb6125884c04769a553e4cefbff9125c8b73c26d" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.366051 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.375433 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 11:48:34 crc kubenswrapper[4766]: E0129 11:48:34.375938 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa981e86-43ce-42d8-973a-abc1097dd8c9" containerName="nova-api-log" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.375963 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa981e86-43ce-42d8-973a-abc1097dd8c9" containerName="nova-api-log" Jan 29 11:48:34 crc kubenswrapper[4766]: E0129 11:48:34.376003 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa981e86-43ce-42d8-973a-abc1097dd8c9" containerName="nova-api-api" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.376012 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa981e86-43ce-42d8-973a-abc1097dd8c9" containerName="nova-api-api" Jan 29 11:48:34 crc kubenswrapper[4766]: E0129 11:48:34.376042 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f103e36-e206-4730-b776-9d9f0bc9b264" containerName="nova-scheduler-scheduler" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.376053 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f103e36-e206-4730-b776-9d9f0bc9b264" containerName="nova-scheduler-scheduler" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.376298 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f103e36-e206-4730-b776-9d9f0bc9b264" containerName="nova-scheduler-scheduler" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.376333 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa981e86-43ce-42d8-973a-abc1097dd8c9" containerName="nova-api-log" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.376360 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa981e86-43ce-42d8-973a-abc1097dd8c9" containerName="nova-api-api" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.377601 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.380194 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.387318 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.394733 4766 scope.go:117] "RemoveContainer" containerID="1230dec16eaaebb2152c1349b3b659606b2320b714954b3c77bb1dbbcb7f86ba" Jan 29 11:48:34 crc kubenswrapper[4766]: E0129 11:48:34.397779 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1230dec16eaaebb2152c1349b3b659606b2320b714954b3c77bb1dbbcb7f86ba\": container with ID starting with 1230dec16eaaebb2152c1349b3b659606b2320b714954b3c77bb1dbbcb7f86ba not found: ID does not exist" containerID="1230dec16eaaebb2152c1349b3b659606b2320b714954b3c77bb1dbbcb7f86ba" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.397825 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1230dec16eaaebb2152c1349b3b659606b2320b714954b3c77bb1dbbcb7f86ba"} err="failed to get container status \"1230dec16eaaebb2152c1349b3b659606b2320b714954b3c77bb1dbbcb7f86ba\": rpc error: code = NotFound desc = could not find container \"1230dec16eaaebb2152c1349b3b659606b2320b714954b3c77bb1dbbcb7f86ba\": container with ID starting with 1230dec16eaaebb2152c1349b3b659606b2320b714954b3c77bb1dbbcb7f86ba not found: ID does not exist" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.397862 4766 scope.go:117] "RemoveContainer" containerID="ad72971f41a6734e63b05e49cb6125884c04769a553e4cefbff9125c8b73c26d" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.398011 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:48:34 crc kubenswrapper[4766]: E0129 11:48:34.398739 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad72971f41a6734e63b05e49cb6125884c04769a553e4cefbff9125c8b73c26d\": container with ID starting with ad72971f41a6734e63b05e49cb6125884c04769a553e4cefbff9125c8b73c26d not found: ID does not exist" containerID="ad72971f41a6734e63b05e49cb6125884c04769a553e4cefbff9125c8b73c26d" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.398769 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad72971f41a6734e63b05e49cb6125884c04769a553e4cefbff9125c8b73c26d"} err="failed to get container status \"ad72971f41a6734e63b05e49cb6125884c04769a553e4cefbff9125c8b73c26d\": rpc error: code = NotFound desc = could not find container \"ad72971f41a6734e63b05e49cb6125884c04769a553e4cefbff9125c8b73c26d\": container with ID starting with ad72971f41a6734e63b05e49cb6125884c04769a553e4cefbff9125c8b73c26d not found: ID does not exist" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.398789 4766 scope.go:117] "RemoveContainer" containerID="2aa8e942a3e1dfe2d25f4e43f92cb6937d91c09b030cf4be087491b37197791c" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.412788 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.420564 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.422035 4766 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.427184 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.443249 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.467195 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fbb9177-98aa-46ac-a894-3ffa1ff170f2-logs\") pod \"nova-api-0\" (UID: \"5fbb9177-98aa-46ac-a894-3ffa1ff170f2\") " pod="openstack/nova-api-0" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.467343 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fbb9177-98aa-46ac-a894-3ffa1ff170f2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5fbb9177-98aa-46ac-a894-3ffa1ff170f2\") " pod="openstack/nova-api-0" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.467451 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fbb9177-98aa-46ac-a894-3ffa1ff170f2-config-data\") pod \"nova-api-0\" (UID: \"5fbb9177-98aa-46ac-a894-3ffa1ff170f2\") " pod="openstack/nova-api-0" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.467485 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bkb2\" (UniqueName: \"kubernetes.io/projected/5fbb9177-98aa-46ac-a894-3ffa1ff170f2-kube-api-access-4bkb2\") pod \"nova-api-0\" (UID: \"5fbb9177-98aa-46ac-a894-3ffa1ff170f2\") " pod="openstack/nova-api-0" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.481843 4766 scope.go:117] "RemoveContainer" containerID="2aa8e942a3e1dfe2d25f4e43f92cb6937d91c09b030cf4be087491b37197791c" Jan 29 11:48:34 crc kubenswrapper[4766]: E0129 11:48:34.482356 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aa8e942a3e1dfe2d25f4e43f92cb6937d91c09b030cf4be087491b37197791c\": container with ID starting with 2aa8e942a3e1dfe2d25f4e43f92cb6937d91c09b030cf4be087491b37197791c not found: ID does not exist" containerID="2aa8e942a3e1dfe2d25f4e43f92cb6937d91c09b030cf4be087491b37197791c" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.482394 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aa8e942a3e1dfe2d25f4e43f92cb6937d91c09b030cf4be087491b37197791c"} err="failed to get container status \"2aa8e942a3e1dfe2d25f4e43f92cb6937d91c09b030cf4be087491b37197791c\": rpc error: code = NotFound desc = could not find container \"2aa8e942a3e1dfe2d25f4e43f92cb6937d91c09b030cf4be087491b37197791c\": container with ID starting with 2aa8e942a3e1dfe2d25f4e43f92cb6937d91c09b030cf4be087491b37197791c not found: ID does not exist" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.570308 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0108fc2a-9d13-4196-bb57-b72855958161-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0108fc2a-9d13-4196-bb57-b72855958161\") " pod="openstack/nova-scheduler-0" Jan 29 11:48:34 crc 
kubenswrapper[4766]: I0129 11:48:34.570378 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0108fc2a-9d13-4196-bb57-b72855958161-config-data\") pod \"nova-scheduler-0\" (UID: \"0108fc2a-9d13-4196-bb57-b72855958161\") " pod="openstack/nova-scheduler-0" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.570405 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fbb9177-98aa-46ac-a894-3ffa1ff170f2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5fbb9177-98aa-46ac-a894-3ffa1ff170f2\") " pod="openstack/nova-api-0" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.570596 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fbb9177-98aa-46ac-a894-3ffa1ff170f2-config-data\") pod \"nova-api-0\" (UID: \"5fbb9177-98aa-46ac-a894-3ffa1ff170f2\") " pod="openstack/nova-api-0" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.570665 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bkb2\" (UniqueName: \"kubernetes.io/projected/5fbb9177-98aa-46ac-a894-3ffa1ff170f2-kube-api-access-4bkb2\") pod \"nova-api-0\" (UID: \"5fbb9177-98aa-46ac-a894-3ffa1ff170f2\") " pod="openstack/nova-api-0" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.570786 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fbb9177-98aa-46ac-a894-3ffa1ff170f2-logs\") pod \"nova-api-0\" (UID: \"5fbb9177-98aa-46ac-a894-3ffa1ff170f2\") " pod="openstack/nova-api-0" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.570897 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lhpc\" (UniqueName: \"kubernetes.io/projected/0108fc2a-9d13-4196-bb57-b72855958161-kube-api-access-8lhpc\") pod \"nova-scheduler-0\" (UID: \"0108fc2a-9d13-4196-bb57-b72855958161\") " pod="openstack/nova-scheduler-0" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.571120 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fbb9177-98aa-46ac-a894-3ffa1ff170f2-logs\") pod \"nova-api-0\" (UID: \"5fbb9177-98aa-46ac-a894-3ffa1ff170f2\") " pod="openstack/nova-api-0" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.573824 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fbb9177-98aa-46ac-a894-3ffa1ff170f2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5fbb9177-98aa-46ac-a894-3ffa1ff170f2\") " pod="openstack/nova-api-0" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.574503 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fbb9177-98aa-46ac-a894-3ffa1ff170f2-config-data\") pod \"nova-api-0\" (UID: \"5fbb9177-98aa-46ac-a894-3ffa1ff170f2\") " pod="openstack/nova-api-0" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.586870 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bkb2\" (UniqueName: \"kubernetes.io/projected/5fbb9177-98aa-46ac-a894-3ffa1ff170f2-kube-api-access-4bkb2\") pod \"nova-api-0\" (UID: \"5fbb9177-98aa-46ac-a894-3ffa1ff170f2\") " pod="openstack/nova-api-0" Jan 29 11:48:34 crc 
kubenswrapper[4766]: I0129 11:48:34.672176 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0108fc2a-9d13-4196-bb57-b72855958161-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0108fc2a-9d13-4196-bb57-b72855958161\") " pod="openstack/nova-scheduler-0" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.672254 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0108fc2a-9d13-4196-bb57-b72855958161-config-data\") pod \"nova-scheduler-0\" (UID: \"0108fc2a-9d13-4196-bb57-b72855958161\") " pod="openstack/nova-scheduler-0" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.672371 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lhpc\" (UniqueName: \"kubernetes.io/projected/0108fc2a-9d13-4196-bb57-b72855958161-kube-api-access-8lhpc\") pod \"nova-scheduler-0\" (UID: \"0108fc2a-9d13-4196-bb57-b72855958161\") " pod="openstack/nova-scheduler-0" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.677767 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0108fc2a-9d13-4196-bb57-b72855958161-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0108fc2a-9d13-4196-bb57-b72855958161\") " pod="openstack/nova-scheduler-0" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.678817 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0108fc2a-9d13-4196-bb57-b72855958161-config-data\") pod \"nova-scheduler-0\" (UID: \"0108fc2a-9d13-4196-bb57-b72855958161\") " pod="openstack/nova-scheduler-0" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.688257 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lhpc\" (UniqueName: \"kubernetes.io/projected/0108fc2a-9d13-4196-bb57-b72855958161-kube-api-access-8lhpc\") pod \"nova-scheduler-0\" (UID: \"0108fc2a-9d13-4196-bb57-b72855958161\") " pod="openstack/nova-scheduler-0" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.745751 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.757730 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.758603 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 11:48:34 crc kubenswrapper[4766]: I0129 11:48:34.815327 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:48:35 crc kubenswrapper[4766]: W0129 11:48:35.187398 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5fbb9177_98aa_46ac_a894_3ffa1ff170f2.slice/crio-337469c261e780ac2a00a4f2bd9d63095a9ac1f11401ae402c2e9f7d125c5f0a WatchSource:0}: Error finding container 337469c261e780ac2a00a4f2bd9d63095a9ac1f11401ae402c2e9f7d125c5f0a: Status 404 returned error can't find the container with id 337469c261e780ac2a00a4f2bd9d63095a9ac1f11401ae402c2e9f7d125c5f0a Jan 29 11:48:35 crc kubenswrapper[4766]: I0129 11:48:35.191696 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:48:35 crc kubenswrapper[4766]: I0129 11:48:35.235561 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f103e36-e206-4730-b776-9d9f0bc9b264" path="/var/lib/kubelet/pods/2f103e36-e206-4730-b776-9d9f0bc9b264/volumes" Jan 29 11:48:35 crc kubenswrapper[4766]: I0129 11:48:35.236567 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa981e86-43ce-42d8-973a-abc1097dd8c9" path="/var/lib/kubelet/pods/aa981e86-43ce-42d8-973a-abc1097dd8c9/volumes" Jan 29 11:48:35 crc kubenswrapper[4766]: W0129 11:48:35.301787 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0108fc2a_9d13_4196_bb57_b72855958161.slice/crio-3df19e89d28136967ee5c7a302610ad5db307c14c83bbbfcbb9962cbb10ab578 WatchSource:0}: Error finding container 3df19e89d28136967ee5c7a302610ad5db307c14c83bbbfcbb9962cbb10ab578: Status 404 returned error can't find the container with id 3df19e89d28136967ee5c7a302610ad5db307c14c83bbbfcbb9962cbb10ab578 Jan 29 11:48:35 crc kubenswrapper[4766]: I0129 11:48:35.301815 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:48:35 crc kubenswrapper[4766]: I0129 11:48:35.337063 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5fbb9177-98aa-46ac-a894-3ffa1ff170f2","Type":"ContainerStarted","Data":"337469c261e780ac2a00a4f2bd9d63095a9ac1f11401ae402c2e9f7d125c5f0a"} Jan 29 11:48:35 crc kubenswrapper[4766]: I0129 11:48:35.338173 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0108fc2a-9d13-4196-bb57-b72855958161","Type":"ContainerStarted","Data":"3df19e89d28136967ee5c7a302610ad5db307c14c83bbbfcbb9962cbb10ab578"} Jan 29 11:48:36 crc kubenswrapper[4766]: I0129 11:48:36.349730 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0108fc2a-9d13-4196-bb57-b72855958161","Type":"ContainerStarted","Data":"2034d6227e9a728ab1260c8c65a4e7d76c258e42cbb015afa364f3438e79f915"} Jan 29 11:48:36 crc kubenswrapper[4766]: I0129 11:48:36.352201 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5fbb9177-98aa-46ac-a894-3ffa1ff170f2","Type":"ContainerStarted","Data":"6db917ed4f89dc93ef26f5981ebb17045bea3e39a008238809bdf181e51e3113"} Jan 29 11:48:36 crc kubenswrapper[4766]: I0129 11:48:36.352725 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5fbb9177-98aa-46ac-a894-3ffa1ff170f2","Type":"ContainerStarted","Data":"758f4b3d4eae8bb5272af415c6ee02d6c034c0cf51452b2d3749f1f29ca5ce55"} Jan 29 11:48:36 crc kubenswrapper[4766]: I0129 11:48:36.369593 4766 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.369574801 podStartE2EDuration="2.369574801s" podCreationTimestamp="2026-01-29 11:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:48:36.368159594 +0000 UTC m=+1653.480552655" watchObservedRunningTime="2026-01-29 11:48:36.369574801 +0000 UTC m=+1653.481967812" Jan 29 11:48:36 crc kubenswrapper[4766]: I0129 11:48:36.385642 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.385623524 podStartE2EDuration="2.385623524s" podCreationTimestamp="2026-01-29 11:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:48:36.384102594 +0000 UTC m=+1653.496495605" watchObservedRunningTime="2026-01-29 11:48:36.385623524 +0000 UTC m=+1653.498016545" Jan 29 11:48:39 crc kubenswrapper[4766]: I0129 11:48:39.723472 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 29 11:48:39 crc kubenswrapper[4766]: I0129 11:48:39.758278 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 11:48:39 crc kubenswrapper[4766]: I0129 11:48:39.758640 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 11:48:39 crc kubenswrapper[4766]: I0129 11:48:39.816215 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 11:48:40 crc kubenswrapper[4766]: I0129 11:48:40.763148 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2d1764d6-be26-4597-b4bf-141727790edf" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.187:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:48:40 crc kubenswrapper[4766]: I0129 11:48:40.766663 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2d1764d6-be26-4597-b4bf-141727790edf" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.187:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:48:41 crc kubenswrapper[4766]: I0129 11:48:41.228686 4766 scope.go:117] "RemoveContainer" containerID="0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d" Jan 29 11:48:41 crc kubenswrapper[4766]: E0129 11:48:41.228916 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 11:48:44 crc kubenswrapper[4766]: I0129 11:48:44.133198 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 11:48:44 crc kubenswrapper[4766]: I0129 11:48:44.746800 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 11:48:44 crc kubenswrapper[4766]: I0129 11:48:44.746867 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-api-0" Jan 29 11:48:44 crc kubenswrapper[4766]: I0129 11:48:44.816072 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 11:48:44 crc kubenswrapper[4766]: I0129 11:48:44.846141 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 11:48:45 crc kubenswrapper[4766]: I0129 11:48:45.480603 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 11:48:45 crc kubenswrapper[4766]: I0129 11:48:45.828644 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5fbb9177-98aa-46ac-a894-3ffa1ff170f2" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:48:45 crc kubenswrapper[4766]: I0129 11:48:45.828681 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5fbb9177-98aa-46ac-a894-3ffa1ff170f2" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:48:47 crc kubenswrapper[4766]: I0129 11:48:47.769678 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:48:47 crc kubenswrapper[4766]: I0129 11:48:47.769953 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="c10a3d13-c16f-41fa-83ac-c3454b7ed6c4" containerName="kube-state-metrics" containerID="cri-o://b0a884b3bf6f2c44a280cd0feb5b6a5eea03af1a7f5108f2b66b05a812df65b2" gracePeriod=30 Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.317056 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.431403 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6s7h\" (UniqueName: \"kubernetes.io/projected/c10a3d13-c16f-41fa-83ac-c3454b7ed6c4-kube-api-access-z6s7h\") pod \"c10a3d13-c16f-41fa-83ac-c3454b7ed6c4\" (UID: \"c10a3d13-c16f-41fa-83ac-c3454b7ed6c4\") " Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.437088 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c10a3d13-c16f-41fa-83ac-c3454b7ed6c4-kube-api-access-z6s7h" (OuterVolumeSpecName: "kube-api-access-z6s7h") pod "c10a3d13-c16f-41fa-83ac-c3454b7ed6c4" (UID: "c10a3d13-c16f-41fa-83ac-c3454b7ed6c4"). InnerVolumeSpecName "kube-api-access-z6s7h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.469555 4766 generic.go:334] "Generic (PLEG): container finished" podID="c10a3d13-c16f-41fa-83ac-c3454b7ed6c4" containerID="b0a884b3bf6f2c44a280cd0feb5b6a5eea03af1a7f5108f2b66b05a812df65b2" exitCode=2 Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.469614 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c10a3d13-c16f-41fa-83ac-c3454b7ed6c4","Type":"ContainerDied","Data":"b0a884b3bf6f2c44a280cd0feb5b6a5eea03af1a7f5108f2b66b05a812df65b2"} Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.469643 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c10a3d13-c16f-41fa-83ac-c3454b7ed6c4","Type":"ContainerDied","Data":"cecdbf445115816e29318051e1778a98c9f25d03aeb20159fd09caaa8f0ee697"} Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.469666 4766 scope.go:117] "RemoveContainer" containerID="b0a884b3bf6f2c44a280cd0feb5b6a5eea03af1a7f5108f2b66b05a812df65b2" Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.469671 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.505629 4766 scope.go:117] "RemoveContainer" containerID="b0a884b3bf6f2c44a280cd0feb5b6a5eea03af1a7f5108f2b66b05a812df65b2" Jan 29 11:48:48 crc kubenswrapper[4766]: E0129 11:48:48.506371 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0a884b3bf6f2c44a280cd0feb5b6a5eea03af1a7f5108f2b66b05a812df65b2\": container with ID starting with b0a884b3bf6f2c44a280cd0feb5b6a5eea03af1a7f5108f2b66b05a812df65b2 not found: ID does not exist" containerID="b0a884b3bf6f2c44a280cd0feb5b6a5eea03af1a7f5108f2b66b05a812df65b2" Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.506492 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0a884b3bf6f2c44a280cd0feb5b6a5eea03af1a7f5108f2b66b05a812df65b2"} err="failed to get container status \"b0a884b3bf6f2c44a280cd0feb5b6a5eea03af1a7f5108f2b66b05a812df65b2\": rpc error: code = NotFound desc = could not find container \"b0a884b3bf6f2c44a280cd0feb5b6a5eea03af1a7f5108f2b66b05a812df65b2\": container with ID starting with b0a884b3bf6f2c44a280cd0feb5b6a5eea03af1a7f5108f2b66b05a812df65b2 not found: ID does not exist" Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.513981 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.524519 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.533364 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:48:48 crc kubenswrapper[4766]: E0129 11:48:48.533753 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10a3d13-c16f-41fa-83ac-c3454b7ed6c4" containerName="kube-state-metrics" Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.533771 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10a3d13-c16f-41fa-83ac-c3454b7ed6c4" containerName="kube-state-metrics" Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.533927 4766 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c10a3d13-c16f-41fa-83ac-c3454b7ed6c4" containerName="kube-state-metrics" Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.534076 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6s7h\" (UniqueName: \"kubernetes.io/projected/c10a3d13-c16f-41fa-83ac-c3454b7ed6c4-kube-api-access-z6s7h\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.534490 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.536453 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.540125 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.578609 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.635253 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59l6r\" (UniqueName: \"kubernetes.io/projected/3ed02fac-f569-47e7-a243-6d0e37dc6c05-kube-api-access-59l6r\") pod \"kube-state-metrics-0\" (UID: \"3ed02fac-f569-47e7-a243-6d0e37dc6c05\") " pod="openstack/kube-state-metrics-0" Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.635294 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ed02fac-f569-47e7-a243-6d0e37dc6c05-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3ed02fac-f569-47e7-a243-6d0e37dc6c05\") " pod="openstack/kube-state-metrics-0" Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.635343 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed02fac-f569-47e7-a243-6d0e37dc6c05-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3ed02fac-f569-47e7-a243-6d0e37dc6c05\") " pod="openstack/kube-state-metrics-0" Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.635377 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3ed02fac-f569-47e7-a243-6d0e37dc6c05-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3ed02fac-f569-47e7-a243-6d0e37dc6c05\") " pod="openstack/kube-state-metrics-0" Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.736698 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed02fac-f569-47e7-a243-6d0e37dc6c05-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3ed02fac-f569-47e7-a243-6d0e37dc6c05\") " pod="openstack/kube-state-metrics-0" Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.736764 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3ed02fac-f569-47e7-a243-6d0e37dc6c05-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3ed02fac-f569-47e7-a243-6d0e37dc6c05\") " pod="openstack/kube-state-metrics-0" Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.736868 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-59l6r\" (UniqueName: \"kubernetes.io/projected/3ed02fac-f569-47e7-a243-6d0e37dc6c05-kube-api-access-59l6r\") pod \"kube-state-metrics-0\" (UID: \"3ed02fac-f569-47e7-a243-6d0e37dc6c05\") " pod="openstack/kube-state-metrics-0" Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.736887 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ed02fac-f569-47e7-a243-6d0e37dc6c05-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3ed02fac-f569-47e7-a243-6d0e37dc6c05\") " pod="openstack/kube-state-metrics-0" Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.741625 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed02fac-f569-47e7-a243-6d0e37dc6c05-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3ed02fac-f569-47e7-a243-6d0e37dc6c05\") " pod="openstack/kube-state-metrics-0" Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.742613 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ed02fac-f569-47e7-a243-6d0e37dc6c05-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3ed02fac-f569-47e7-a243-6d0e37dc6c05\") " pod="openstack/kube-state-metrics-0" Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.744961 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3ed02fac-f569-47e7-a243-6d0e37dc6c05-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3ed02fac-f569-47e7-a243-6d0e37dc6c05\") " pod="openstack/kube-state-metrics-0" Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.757818 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59l6r\" (UniqueName: \"kubernetes.io/projected/3ed02fac-f569-47e7-a243-6d0e37dc6c05-kube-api-access-59l6r\") pod \"kube-state-metrics-0\" (UID: \"3ed02fac-f569-47e7-a243-6d0e37dc6c05\") " pod="openstack/kube-state-metrics-0" Jan 29 11:48:48 crc kubenswrapper[4766]: I0129 11:48:48.862915 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 11:48:49 crc kubenswrapper[4766]: I0129 11:48:49.235252 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c10a3d13-c16f-41fa-83ac-c3454b7ed6c4" path="/var/lib/kubelet/pods/c10a3d13-c16f-41fa-83ac-c3454b7ed6c4/volumes" Jan 29 11:48:49 crc kubenswrapper[4766]: I0129 11:48:49.316447 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:48:49 crc kubenswrapper[4766]: I0129 11:48:49.325846 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 11:48:49 crc kubenswrapper[4766]: I0129 11:48:49.479988 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3ed02fac-f569-47e7-a243-6d0e37dc6c05","Type":"ContainerStarted","Data":"c156cf6ad7a4e32c907b5cc716c7585bd7b76367b9d53c43e4e1b9e7645295f3"} Jan 29 11:48:49 crc kubenswrapper[4766]: I0129 11:48:49.531308 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:48:49 crc kubenswrapper[4766]: I0129 11:48:49.533272 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eff8bd7d-54dd-4551-b4cc-ba2937c82324" containerName="ceilometer-central-agent" containerID="cri-o://937ca40ed4fd7b41e809e3599fce3939551af21d598c3b2ecf3a003dd5bf2429" gracePeriod=30 Jan 29 11:48:49 crc kubenswrapper[4766]: I0129 11:48:49.533321 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eff8bd7d-54dd-4551-b4cc-ba2937c82324" containerName="proxy-httpd" containerID="cri-o://298b49517787c33b63dd76f5ae50caf467e5e60fa5ea6499c26fdfdf7c066f5f" gracePeriod=30 Jan 29 11:48:49 crc kubenswrapper[4766]: I0129 11:48:49.533321 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eff8bd7d-54dd-4551-b4cc-ba2937c82324" containerName="sg-core" containerID="cri-o://b461980b50a29cf644096082e55a47d40d39bf90cfb61e2ea7d9c02a50a84352" gracePeriod=30 Jan 29 11:48:49 crc kubenswrapper[4766]: I0129 11:48:49.533361 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eff8bd7d-54dd-4551-b4cc-ba2937c82324" containerName="ceilometer-notification-agent" containerID="cri-o://840b33106c05a1bc82e600553cfa4ddad2df8e43dac113d70a41ad0582941a12" gracePeriod=30 Jan 29 11:48:49 crc kubenswrapper[4766]: I0129 11:48:49.763354 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 11:48:49 crc kubenswrapper[4766]: I0129 11:48:49.763760 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 11:48:49 crc kubenswrapper[4766]: I0129 11:48:49.770837 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 11:48:49 crc kubenswrapper[4766]: I0129 11:48:49.772607 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 11:48:50 crc kubenswrapper[4766]: I0129 11:48:50.493566 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3ed02fac-f569-47e7-a243-6d0e37dc6c05","Type":"ContainerStarted","Data":"98b39f027e94d9b7e2c9e0f75cbec74515a6877539cbea210a05a9de92134411"} Jan 29 11:48:50 crc kubenswrapper[4766]: I0129 11:48:50.493922 4766 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 29 11:48:50 crc kubenswrapper[4766]: I0129 11:48:50.495973 4766 generic.go:334] "Generic (PLEG): container finished" podID="eff8bd7d-54dd-4551-b4cc-ba2937c82324" containerID="298b49517787c33b63dd76f5ae50caf467e5e60fa5ea6499c26fdfdf7c066f5f" exitCode=0 Jan 29 11:48:50 crc kubenswrapper[4766]: I0129 11:48:50.496000 4766 generic.go:334] "Generic (PLEG): container finished" podID="eff8bd7d-54dd-4551-b4cc-ba2937c82324" containerID="b461980b50a29cf644096082e55a47d40d39bf90cfb61e2ea7d9c02a50a84352" exitCode=2 Jan 29 11:48:50 crc kubenswrapper[4766]: I0129 11:48:50.496008 4766 generic.go:334] "Generic (PLEG): container finished" podID="eff8bd7d-54dd-4551-b4cc-ba2937c82324" containerID="937ca40ed4fd7b41e809e3599fce3939551af21d598c3b2ecf3a003dd5bf2429" exitCode=0 Jan 29 11:48:50 crc kubenswrapper[4766]: I0129 11:48:50.496533 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eff8bd7d-54dd-4551-b4cc-ba2937c82324","Type":"ContainerDied","Data":"298b49517787c33b63dd76f5ae50caf467e5e60fa5ea6499c26fdfdf7c066f5f"} Jan 29 11:48:50 crc kubenswrapper[4766]: I0129 11:48:50.496577 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eff8bd7d-54dd-4551-b4cc-ba2937c82324","Type":"ContainerDied","Data":"b461980b50a29cf644096082e55a47d40d39bf90cfb61e2ea7d9c02a50a84352"} Jan 29 11:48:50 crc kubenswrapper[4766]: I0129 11:48:50.496590 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eff8bd7d-54dd-4551-b4cc-ba2937c82324","Type":"ContainerDied","Data":"937ca40ed4fd7b41e809e3599fce3939551af21d598c3b2ecf3a003dd5bf2429"} Jan 29 11:48:53 crc kubenswrapper[4766]: I0129 11:48:53.224166 4766 scope.go:117] "RemoveContainer" containerID="0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d" Jan 29 11:48:53 crc kubenswrapper[4766]: E0129 11:48:53.224853 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 11:48:53 crc kubenswrapper[4766]: I0129 11:48:53.402300 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:53 crc kubenswrapper[4766]: I0129 11:48:53.424386 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=5.053311178 podStartE2EDuration="5.424368269s" podCreationTimestamp="2026-01-29 11:48:48 +0000 UTC" firstStartedPulling="2026-01-29 11:48:49.325626201 +0000 UTC m=+1666.438019212" lastFinishedPulling="2026-01-29 11:48:49.696683292 +0000 UTC m=+1666.809076303" observedRunningTime="2026-01-29 11:48:50.521996206 +0000 UTC m=+1667.634389217" watchObservedRunningTime="2026-01-29 11:48:53.424368269 +0000 UTC m=+1670.536761280" Jan 29 11:48:53 crc kubenswrapper[4766]: I0129 11:48:53.520844 4766 generic.go:334] "Generic (PLEG): container finished" podID="249e533d-989c-4441-875f-23c15d261e83" containerID="4f812b803666bccb65380df3cfcb793f9323a751fe8d6d20a9c1ffc6bbb1d49d" exitCode=137 Jan 29 11:48:53 crc kubenswrapper[4766]: I0129 11:48:53.520889 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"249e533d-989c-4441-875f-23c15d261e83","Type":"ContainerDied","Data":"4f812b803666bccb65380df3cfcb793f9323a751fe8d6d20a9c1ffc6bbb1d49d"} Jan 29 11:48:53 crc kubenswrapper[4766]: I0129 11:48:53.520902 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:53 crc kubenswrapper[4766]: I0129 11:48:53.520925 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"249e533d-989c-4441-875f-23c15d261e83","Type":"ContainerDied","Data":"5b9359b54269a77ffcd341acf6e92c1bf351341b44a1df75f9cf64fe5829e941"} Jan 29 11:48:53 crc kubenswrapper[4766]: I0129 11:48:53.520944 4766 scope.go:117] "RemoveContainer" containerID="4f812b803666bccb65380df3cfcb793f9323a751fe8d6d20a9c1ffc6bbb1d49d" Jan 29 11:48:53 crc kubenswrapper[4766]: I0129 11:48:53.527799 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5v8b\" (UniqueName: \"kubernetes.io/projected/249e533d-989c-4441-875f-23c15d261e83-kube-api-access-m5v8b\") pod \"249e533d-989c-4441-875f-23c15d261e83\" (UID: \"249e533d-989c-4441-875f-23c15d261e83\") " Jan 29 11:48:53 crc kubenswrapper[4766]: I0129 11:48:53.527873 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/249e533d-989c-4441-875f-23c15d261e83-combined-ca-bundle\") pod \"249e533d-989c-4441-875f-23c15d261e83\" (UID: \"249e533d-989c-4441-875f-23c15d261e83\") " Jan 29 11:48:53 crc kubenswrapper[4766]: I0129 11:48:53.527905 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/249e533d-989c-4441-875f-23c15d261e83-config-data\") pod \"249e533d-989c-4441-875f-23c15d261e83\" (UID: \"249e533d-989c-4441-875f-23c15d261e83\") " Jan 29 11:48:53 crc kubenswrapper[4766]: I0129 11:48:53.542433 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/249e533d-989c-4441-875f-23c15d261e83-kube-api-access-m5v8b" (OuterVolumeSpecName: "kube-api-access-m5v8b") pod "249e533d-989c-4441-875f-23c15d261e83" (UID: "249e533d-989c-4441-875f-23c15d261e83"). InnerVolumeSpecName "kube-api-access-m5v8b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:48:53 crc kubenswrapper[4766]: I0129 11:48:53.552608 4766 scope.go:117] "RemoveContainer" containerID="4f812b803666bccb65380df3cfcb793f9323a751fe8d6d20a9c1ffc6bbb1d49d" Jan 29 11:48:53 crc kubenswrapper[4766]: E0129 11:48:53.553258 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f812b803666bccb65380df3cfcb793f9323a751fe8d6d20a9c1ffc6bbb1d49d\": container with ID starting with 4f812b803666bccb65380df3cfcb793f9323a751fe8d6d20a9c1ffc6bbb1d49d not found: ID does not exist" containerID="4f812b803666bccb65380df3cfcb793f9323a751fe8d6d20a9c1ffc6bbb1d49d" Jan 29 11:48:53 crc kubenswrapper[4766]: I0129 11:48:53.553315 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f812b803666bccb65380df3cfcb793f9323a751fe8d6d20a9c1ffc6bbb1d49d"} err="failed to get container status \"4f812b803666bccb65380df3cfcb793f9323a751fe8d6d20a9c1ffc6bbb1d49d\": rpc error: code = NotFound desc = could not find container \"4f812b803666bccb65380df3cfcb793f9323a751fe8d6d20a9c1ffc6bbb1d49d\": container with ID starting with 4f812b803666bccb65380df3cfcb793f9323a751fe8d6d20a9c1ffc6bbb1d49d not found: ID does not exist" Jan 29 11:48:53 crc kubenswrapper[4766]: I0129 11:48:53.563611 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/249e533d-989c-4441-875f-23c15d261e83-config-data" (OuterVolumeSpecName: "config-data") pod "249e533d-989c-4441-875f-23c15d261e83" (UID: "249e533d-989c-4441-875f-23c15d261e83"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:48:53 crc kubenswrapper[4766]: I0129 11:48:53.564023 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/249e533d-989c-4441-875f-23c15d261e83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "249e533d-989c-4441-875f-23c15d261e83" (UID: "249e533d-989c-4441-875f-23c15d261e83"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:48:53 crc kubenswrapper[4766]: I0129 11:48:53.630256 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5v8b\" (UniqueName: \"kubernetes.io/projected/249e533d-989c-4441-875f-23c15d261e83-kube-api-access-m5v8b\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:53 crc kubenswrapper[4766]: I0129 11:48:53.630295 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/249e533d-989c-4441-875f-23c15d261e83-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:53 crc kubenswrapper[4766]: I0129 11:48:53.630306 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/249e533d-989c-4441-875f-23c15d261e83-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:53 crc kubenswrapper[4766]: I0129 11:48:53.866320 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 11:48:53 crc kubenswrapper[4766]: I0129 11:48:53.881992 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 11:48:53 crc kubenswrapper[4766]: I0129 11:48:53.893374 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 11:48:53 crc kubenswrapper[4766]: E0129 11:48:53.893799 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="249e533d-989c-4441-875f-23c15d261e83" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 11:48:53 crc kubenswrapper[4766]: I0129 11:48:53.893821 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="249e533d-989c-4441-875f-23c15d261e83" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 11:48:53 crc kubenswrapper[4766]: I0129 11:48:53.894021 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="249e533d-989c-4441-875f-23c15d261e83" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 11:48:53 crc kubenswrapper[4766]: I0129 11:48:53.894708 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:53 crc kubenswrapper[4766]: I0129 11:48:53.897647 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 29 11:48:53 crc kubenswrapper[4766]: I0129 11:48:53.897902 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 29 11:48:53 crc kubenswrapper[4766]: I0129 11:48:53.898023 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 11:48:53 crc kubenswrapper[4766]: I0129 11:48:53.902349 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.014359 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.038313 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/982e76a1-f77f-4569-bb8e-f524dba573ca-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"982e76a1-f77f-4569-bb8e-f524dba573ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.038424 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7rjc\" (UniqueName: \"kubernetes.io/projected/982e76a1-f77f-4569-bb8e-f524dba573ca-kube-api-access-b7rjc\") pod \"nova-cell1-novncproxy-0\" (UID: \"982e76a1-f77f-4569-bb8e-f524dba573ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.038574 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/982e76a1-f77f-4569-bb8e-f524dba573ca-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"982e76a1-f77f-4569-bb8e-f524dba573ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.038598 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/982e76a1-f77f-4569-bb8e-f524dba573ca-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"982e76a1-f77f-4569-bb8e-f524dba573ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.038684 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/982e76a1-f77f-4569-bb8e-f524dba573ca-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"982e76a1-f77f-4569-bb8e-f524dba573ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.139377 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eff8bd7d-54dd-4551-b4cc-ba2937c82324-run-httpd\") pod \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\" (UID: \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\") " Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.139874 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eff8bd7d-54dd-4551-b4cc-ba2937c82324-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "eff8bd7d-54dd-4551-b4cc-ba2937c82324" (UID: "eff8bd7d-54dd-4551-b4cc-ba2937c82324"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.139981 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eff8bd7d-54dd-4551-b4cc-ba2937c82324-sg-core-conf-yaml\") pod \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\" (UID: \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\") " Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.140515 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eff8bd7d-54dd-4551-b4cc-ba2937c82324-log-httpd\") pod \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\" (UID: \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\") " Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.140540 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eff8bd7d-54dd-4551-b4cc-ba2937c82324-scripts\") pod \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\" (UID: \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\") " Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.140583 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eff8bd7d-54dd-4551-b4cc-ba2937c82324-config-data\") pod \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\" (UID: \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\") " Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.140609 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff8bd7d-54dd-4551-b4cc-ba2937c82324-combined-ca-bundle\") pod \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\" (UID: \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\") " Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.140687 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6d2t\" (UniqueName: \"kubernetes.io/projected/eff8bd7d-54dd-4551-b4cc-ba2937c82324-kube-api-access-q6d2t\") pod \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\" (UID: \"eff8bd7d-54dd-4551-b4cc-ba2937c82324\") " Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.140949 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/982e76a1-f77f-4569-bb8e-f524dba573ca-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"982e76a1-f77f-4569-bb8e-f524dba573ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.140976 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/982e76a1-f77f-4569-bb8e-f524dba573ca-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"982e76a1-f77f-4569-bb8e-f524dba573ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.141046 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/982e76a1-f77f-4569-bb8e-f524dba573ca-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"982e76a1-f77f-4569-bb8e-f524dba573ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.141090 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7rjc\" (UniqueName: 
\"kubernetes.io/projected/982e76a1-f77f-4569-bb8e-f524dba573ca-kube-api-access-b7rjc\") pod \"nova-cell1-novncproxy-0\" (UID: \"982e76a1-f77f-4569-bb8e-f524dba573ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.141282 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/982e76a1-f77f-4569-bb8e-f524dba573ca-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"982e76a1-f77f-4569-bb8e-f524dba573ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.141440 4766 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eff8bd7d-54dd-4551-b4cc-ba2937c82324-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.141518 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eff8bd7d-54dd-4551-b4cc-ba2937c82324-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "eff8bd7d-54dd-4551-b4cc-ba2937c82324" (UID: "eff8bd7d-54dd-4551-b4cc-ba2937c82324"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.145652 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/982e76a1-f77f-4569-bb8e-f524dba573ca-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"982e76a1-f77f-4569-bb8e-f524dba573ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.145833 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/982e76a1-f77f-4569-bb8e-f524dba573ca-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"982e76a1-f77f-4569-bb8e-f524dba573ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.146534 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eff8bd7d-54dd-4551-b4cc-ba2937c82324-scripts" (OuterVolumeSpecName: "scripts") pod "eff8bd7d-54dd-4551-b4cc-ba2937c82324" (UID: "eff8bd7d-54dd-4551-b4cc-ba2937c82324"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.149329 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/982e76a1-f77f-4569-bb8e-f524dba573ca-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"982e76a1-f77f-4569-bb8e-f524dba573ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.150293 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/982e76a1-f77f-4569-bb8e-f524dba573ca-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"982e76a1-f77f-4569-bb8e-f524dba573ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.152759 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eff8bd7d-54dd-4551-b4cc-ba2937c82324-kube-api-access-q6d2t" (OuterVolumeSpecName: "kube-api-access-q6d2t") pod "eff8bd7d-54dd-4551-b4cc-ba2937c82324" (UID: "eff8bd7d-54dd-4551-b4cc-ba2937c82324"). InnerVolumeSpecName "kube-api-access-q6d2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.158884 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7rjc\" (UniqueName: \"kubernetes.io/projected/982e76a1-f77f-4569-bb8e-f524dba573ca-kube-api-access-b7rjc\") pod \"nova-cell1-novncproxy-0\" (UID: \"982e76a1-f77f-4569-bb8e-f524dba573ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.170478 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eff8bd7d-54dd-4551-b4cc-ba2937c82324-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "eff8bd7d-54dd-4551-b4cc-ba2937c82324" (UID: "eff8bd7d-54dd-4551-b4cc-ba2937c82324"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.218878 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eff8bd7d-54dd-4551-b4cc-ba2937c82324-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eff8bd7d-54dd-4551-b4cc-ba2937c82324" (UID: "eff8bd7d-54dd-4551-b4cc-ba2937c82324"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.229792 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.242640 4766 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eff8bd7d-54dd-4551-b4cc-ba2937c82324-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.242682 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eff8bd7d-54dd-4551-b4cc-ba2937c82324-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.242690 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff8bd7d-54dd-4551-b4cc-ba2937c82324-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.242704 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6d2t\" (UniqueName: \"kubernetes.io/projected/eff8bd7d-54dd-4551-b4cc-ba2937c82324-kube-api-access-q6d2t\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.242712 4766 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eff8bd7d-54dd-4551-b4cc-ba2937c82324-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.243215 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eff8bd7d-54dd-4551-b4cc-ba2937c82324-config-data" (OuterVolumeSpecName: "config-data") pod "eff8bd7d-54dd-4551-b4cc-ba2937c82324" (UID: "eff8bd7d-54dd-4551-b4cc-ba2937c82324"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.345259 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eff8bd7d-54dd-4551-b4cc-ba2937c82324-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.533763 4766 generic.go:334] "Generic (PLEG): container finished" podID="eff8bd7d-54dd-4551-b4cc-ba2937c82324" containerID="840b33106c05a1bc82e600553cfa4ddad2df8e43dac113d70a41ad0582941a12" exitCode=0 Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.533836 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.533847 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eff8bd7d-54dd-4551-b4cc-ba2937c82324","Type":"ContainerDied","Data":"840b33106c05a1bc82e600553cfa4ddad2df8e43dac113d70a41ad0582941a12"} Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.534110 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eff8bd7d-54dd-4551-b4cc-ba2937c82324","Type":"ContainerDied","Data":"85534ecba83134ab364bfd51817ee8e8a8da1eb9e7d096bc9c0befd134ef37f3"} Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.534128 4766 scope.go:117] "RemoveContainer" containerID="298b49517787c33b63dd76f5ae50caf467e5e60fa5ea6499c26fdfdf7c066f5f" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.562290 4766 scope.go:117] "RemoveContainer" containerID="b461980b50a29cf644096082e55a47d40d39bf90cfb61e2ea7d9c02a50a84352" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.568730 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.579835 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.581791 4766 scope.go:117] "RemoveContainer" containerID="840b33106c05a1bc82e600553cfa4ddad2df8e43dac113d70a41ad0582941a12" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.598272 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:48:54 crc kubenswrapper[4766]: E0129 11:48:54.598800 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eff8bd7d-54dd-4551-b4cc-ba2937c82324" containerName="proxy-httpd" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.598825 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="eff8bd7d-54dd-4551-b4cc-ba2937c82324" containerName="proxy-httpd" Jan 29 11:48:54 crc kubenswrapper[4766]: E0129 11:48:54.598847 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eff8bd7d-54dd-4551-b4cc-ba2937c82324" containerName="ceilometer-notification-agent" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.598861 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="eff8bd7d-54dd-4551-b4cc-ba2937c82324" containerName="ceilometer-notification-agent" Jan 29 11:48:54 crc kubenswrapper[4766]: E0129 11:48:54.598872 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eff8bd7d-54dd-4551-b4cc-ba2937c82324" containerName="sg-core" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.598881 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="eff8bd7d-54dd-4551-b4cc-ba2937c82324" containerName="sg-core" Jan 29 11:48:54 crc kubenswrapper[4766]: E0129 11:48:54.598920 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eff8bd7d-54dd-4551-b4cc-ba2937c82324" containerName="ceilometer-central-agent" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.598931 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="eff8bd7d-54dd-4551-b4cc-ba2937c82324" containerName="ceilometer-central-agent" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.599179 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="eff8bd7d-54dd-4551-b4cc-ba2937c82324" containerName="ceilometer-notification-agent" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.599215 4766 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="eff8bd7d-54dd-4551-b4cc-ba2937c82324" containerName="proxy-httpd" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.599232 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="eff8bd7d-54dd-4551-b4cc-ba2937c82324" containerName="sg-core" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.599247 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="eff8bd7d-54dd-4551-b4cc-ba2937c82324" containerName="ceilometer-central-agent" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.601103 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.603078 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.603306 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.603450 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.610566 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.624629 4766 scope.go:117] "RemoveContainer" containerID="937ca40ed4fd7b41e809e3599fce3939551af21d598c3b2ecf3a003dd5bf2429" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.643955 4766 scope.go:117] "RemoveContainer" containerID="298b49517787c33b63dd76f5ae50caf467e5e60fa5ea6499c26fdfdf7c066f5f" Jan 29 11:48:54 crc kubenswrapper[4766]: E0129 11:48:54.644771 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"298b49517787c33b63dd76f5ae50caf467e5e60fa5ea6499c26fdfdf7c066f5f\": container with ID starting with 298b49517787c33b63dd76f5ae50caf467e5e60fa5ea6499c26fdfdf7c066f5f not found: ID does not exist" containerID="298b49517787c33b63dd76f5ae50caf467e5e60fa5ea6499c26fdfdf7c066f5f" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.644815 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"298b49517787c33b63dd76f5ae50caf467e5e60fa5ea6499c26fdfdf7c066f5f"} err="failed to get container status \"298b49517787c33b63dd76f5ae50caf467e5e60fa5ea6499c26fdfdf7c066f5f\": rpc error: code = NotFound desc = could not find container \"298b49517787c33b63dd76f5ae50caf467e5e60fa5ea6499c26fdfdf7c066f5f\": container with ID starting with 298b49517787c33b63dd76f5ae50caf467e5e60fa5ea6499c26fdfdf7c066f5f not found: ID does not exist" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.644842 4766 scope.go:117] "RemoveContainer" containerID="b461980b50a29cf644096082e55a47d40d39bf90cfb61e2ea7d9c02a50a84352" Jan 29 11:48:54 crc kubenswrapper[4766]: E0129 11:48:54.645132 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b461980b50a29cf644096082e55a47d40d39bf90cfb61e2ea7d9c02a50a84352\": container with ID starting with b461980b50a29cf644096082e55a47d40d39bf90cfb61e2ea7d9c02a50a84352 not found: ID does not exist" containerID="b461980b50a29cf644096082e55a47d40d39bf90cfb61e2ea7d9c02a50a84352" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.645166 4766 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b461980b50a29cf644096082e55a47d40d39bf90cfb61e2ea7d9c02a50a84352"} err="failed to get container status \"b461980b50a29cf644096082e55a47d40d39bf90cfb61e2ea7d9c02a50a84352\": rpc error: code = NotFound desc = could not find container \"b461980b50a29cf644096082e55a47d40d39bf90cfb61e2ea7d9c02a50a84352\": container with ID starting with b461980b50a29cf644096082e55a47d40d39bf90cfb61e2ea7d9c02a50a84352 not found: ID does not exist" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.645188 4766 scope.go:117] "RemoveContainer" containerID="840b33106c05a1bc82e600553cfa4ddad2df8e43dac113d70a41ad0582941a12" Jan 29 11:48:54 crc kubenswrapper[4766]: E0129 11:48:54.645957 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"840b33106c05a1bc82e600553cfa4ddad2df8e43dac113d70a41ad0582941a12\": container with ID starting with 840b33106c05a1bc82e600553cfa4ddad2df8e43dac113d70a41ad0582941a12 not found: ID does not exist" containerID="840b33106c05a1bc82e600553cfa4ddad2df8e43dac113d70a41ad0582941a12" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.645991 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"840b33106c05a1bc82e600553cfa4ddad2df8e43dac113d70a41ad0582941a12"} err="failed to get container status \"840b33106c05a1bc82e600553cfa4ddad2df8e43dac113d70a41ad0582941a12\": rpc error: code = NotFound desc = could not find container \"840b33106c05a1bc82e600553cfa4ddad2df8e43dac113d70a41ad0582941a12\": container with ID starting with 840b33106c05a1bc82e600553cfa4ddad2df8e43dac113d70a41ad0582941a12 not found: ID does not exist" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.646011 4766 scope.go:117] "RemoveContainer" containerID="937ca40ed4fd7b41e809e3599fce3939551af21d598c3b2ecf3a003dd5bf2429" Jan 29 11:48:54 crc kubenswrapper[4766]: E0129 11:48:54.646202 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"937ca40ed4fd7b41e809e3599fce3939551af21d598c3b2ecf3a003dd5bf2429\": container with ID starting with 937ca40ed4fd7b41e809e3599fce3939551af21d598c3b2ecf3a003dd5bf2429 not found: ID does not exist" containerID="937ca40ed4fd7b41e809e3599fce3939551af21d598c3b2ecf3a003dd5bf2429" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.646226 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"937ca40ed4fd7b41e809e3599fce3939551af21d598c3b2ecf3a003dd5bf2429"} err="failed to get container status \"937ca40ed4fd7b41e809e3599fce3939551af21d598c3b2ecf3a003dd5bf2429\": rpc error: code = NotFound desc = could not find container \"937ca40ed4fd7b41e809e3599fce3939551af21d598c3b2ecf3a003dd5bf2429\": container with ID starting with 937ca40ed4fd7b41e809e3599fce3939551af21d598c3b2ecf3a003dd5bf2429 not found: ID does not exist" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.648637 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " pod="openstack/ceilometer-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.648684 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-scripts\") pod \"ceilometer-0\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " pod="openstack/ceilometer-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.648707 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-config-data\") pod \"ceilometer-0\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " pod="openstack/ceilometer-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.648752 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48a6bbb6-58eb-4649-88d0-a270a189d073-log-httpd\") pod \"ceilometer-0\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " pod="openstack/ceilometer-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.648782 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " pod="openstack/ceilometer-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.648816 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9bjk\" (UniqueName: \"kubernetes.io/projected/48a6bbb6-58eb-4649-88d0-a270a189d073-kube-api-access-q9bjk\") pod \"ceilometer-0\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " pod="openstack/ceilometer-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.648844 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48a6bbb6-58eb-4649-88d0-a270a189d073-run-httpd\") pod \"ceilometer-0\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " pod="openstack/ceilometer-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.648902 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " pod="openstack/ceilometer-0" Jan 29 11:48:54 crc kubenswrapper[4766]: W0129 11:48:54.679994 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod982e76a1_f77f_4569_bb8e_f524dba573ca.slice/crio-ffdaeee44f7feedb9a3d5ecebc0a1ea10d4686290cc6e4d813b51e7e45afe566 WatchSource:0}: Error finding container ffdaeee44f7feedb9a3d5ecebc0a1ea10d4686290cc6e4d813b51e7e45afe566: Status 404 returned error can't find the container with id ffdaeee44f7feedb9a3d5ecebc0a1ea10d4686290cc6e4d813b51e7e45afe566 Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.689611 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.750051 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48a6bbb6-58eb-4649-88d0-a270a189d073-log-httpd\") pod \"ceilometer-0\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " pod="openstack/ceilometer-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.750107 4766 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " pod="openstack/ceilometer-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.750152 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9bjk\" (UniqueName: \"kubernetes.io/projected/48a6bbb6-58eb-4649-88d0-a270a189d073-kube-api-access-q9bjk\") pod \"ceilometer-0\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " pod="openstack/ceilometer-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.750182 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48a6bbb6-58eb-4649-88d0-a270a189d073-run-httpd\") pod \"ceilometer-0\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " pod="openstack/ceilometer-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.750566 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " pod="openstack/ceilometer-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.750641 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48a6bbb6-58eb-4649-88d0-a270a189d073-log-httpd\") pod \"ceilometer-0\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " pod="openstack/ceilometer-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.750720 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48a6bbb6-58eb-4649-88d0-a270a189d073-run-httpd\") pod \"ceilometer-0\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " pod="openstack/ceilometer-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.750839 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.750956 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " pod="openstack/ceilometer-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.751005 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-scripts\") pod \"ceilometer-0\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " pod="openstack/ceilometer-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.751026 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-config-data\") pod \"ceilometer-0\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " pod="openstack/ceilometer-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.753926 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"48a6bbb6-58eb-4649-88d0-a270a189d073\") " pod="openstack/ceilometer-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.754165 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " pod="openstack/ceilometer-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.754661 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.757123 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-scripts\") pod \"ceilometer-0\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " pod="openstack/ceilometer-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.757253 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-config-data\") pod \"ceilometer-0\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " pod="openstack/ceilometer-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.757364 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " pod="openstack/ceilometer-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.760911 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.770949 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.782546 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9bjk\" (UniqueName: \"kubernetes.io/projected/48a6bbb6-58eb-4649-88d0-a270a189d073-kube-api-access-q9bjk\") pod \"ceilometer-0\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " pod="openstack/ceilometer-0" Jan 29 11:48:54 crc kubenswrapper[4766]: I0129 11:48:54.924218 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:48:55 crc kubenswrapper[4766]: I0129 11:48:55.237527 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="249e533d-989c-4441-875f-23c15d261e83" path="/var/lib/kubelet/pods/249e533d-989c-4441-875f-23c15d261e83/volumes" Jan 29 11:48:55 crc kubenswrapper[4766]: I0129 11:48:55.238454 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eff8bd7d-54dd-4551-b4cc-ba2937c82324" path="/var/lib/kubelet/pods/eff8bd7d-54dd-4551-b4cc-ba2937c82324/volumes" Jan 29 11:48:55 crc kubenswrapper[4766]: I0129 11:48:55.363479 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:48:55 crc kubenswrapper[4766]: I0129 11:48:55.545700 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"982e76a1-f77f-4569-bb8e-f524dba573ca","Type":"ContainerStarted","Data":"be80252d4149682a15ba44a10bb7c78f7b0e2a78056ea82dcca8b68ed0a66ffa"} Jan 29 11:48:55 crc kubenswrapper[4766]: I0129 11:48:55.546080 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"982e76a1-f77f-4569-bb8e-f524dba573ca","Type":"ContainerStarted","Data":"ffdaeee44f7feedb9a3d5ecebc0a1ea10d4686290cc6e4d813b51e7e45afe566"} Jan 29 11:48:55 crc kubenswrapper[4766]: I0129 11:48:55.548360 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48a6bbb6-58eb-4649-88d0-a270a189d073","Type":"ContainerStarted","Data":"b57f4f8810f091ba1d708fc69691042c49213b1f4d97c74bfefd0d69b4d8b90f"} Jan 29 11:48:55 crc kubenswrapper[4766]: I0129 11:48:55.548602 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 11:48:55 crc kubenswrapper[4766]: I0129 11:48:55.552600 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 11:48:55 crc kubenswrapper[4766]: I0129 11:48:55.570778 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.570756652 podStartE2EDuration="2.570756652s" podCreationTimestamp="2026-01-29 11:48:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:48:55.563559532 +0000 UTC m=+1672.675952543" watchObservedRunningTime="2026-01-29 11:48:55.570756652 +0000 UTC m=+1672.683149683" Jan 29 11:48:55 crc kubenswrapper[4766]: I0129 11:48:55.733483 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-hznsj"] Jan 29 11:48:55 crc kubenswrapper[4766]: I0129 11:48:55.735478 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" Jan 29 11:48:55 crc kubenswrapper[4766]: I0129 11:48:55.759055 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-hznsj"] Jan 29 11:48:55 crc kubenswrapper[4766]: I0129 11:48:55.875819 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpm9s\" (UniqueName: \"kubernetes.io/projected/d9ea6d98-59cc-4526-bf59-7328c0321f59-kube-api-access-kpm9s\") pod \"dnsmasq-dns-cd5cbd7b9-hznsj\" (UID: \"d9ea6d98-59cc-4526-bf59-7328c0321f59\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" Jan 29 11:48:55 crc kubenswrapper[4766]: I0129 11:48:55.875866 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-hznsj\" (UID: \"d9ea6d98-59cc-4526-bf59-7328c0321f59\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" Jan 29 11:48:55 crc kubenswrapper[4766]: I0129 11:48:55.875894 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-hznsj\" (UID: \"d9ea6d98-59cc-4526-bf59-7328c0321f59\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" Jan 29 11:48:55 crc kubenswrapper[4766]: I0129 11:48:55.876026 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-config\") pod \"dnsmasq-dns-cd5cbd7b9-hznsj\" (UID: \"d9ea6d98-59cc-4526-bf59-7328c0321f59\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" Jan 29 11:48:55 crc kubenswrapper[4766]: I0129 11:48:55.876074 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-hznsj\" (UID: \"d9ea6d98-59cc-4526-bf59-7328c0321f59\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" Jan 29 11:48:55 crc kubenswrapper[4766]: I0129 11:48:55.876152 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-hznsj\" (UID: \"d9ea6d98-59cc-4526-bf59-7328c0321f59\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" Jan 29 11:48:55 crc kubenswrapper[4766]: I0129 11:48:55.978209 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-hznsj\" (UID: \"d9ea6d98-59cc-4526-bf59-7328c0321f59\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" Jan 29 11:48:55 crc kubenswrapper[4766]: I0129 11:48:55.978298 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-hznsj\" (UID: \"d9ea6d98-59cc-4526-bf59-7328c0321f59\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" Jan 29 11:48:55 crc kubenswrapper[4766]: I0129 11:48:55.978339 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kpm9s\" (UniqueName: \"kubernetes.io/projected/d9ea6d98-59cc-4526-bf59-7328c0321f59-kube-api-access-kpm9s\") pod \"dnsmasq-dns-cd5cbd7b9-hznsj\" (UID: \"d9ea6d98-59cc-4526-bf59-7328c0321f59\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" Jan 29 11:48:55 crc kubenswrapper[4766]: I0129 11:48:55.978357 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-hznsj\" (UID: \"d9ea6d98-59cc-4526-bf59-7328c0321f59\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" Jan 29 11:48:55 crc kubenswrapper[4766]: I0129 11:48:55.978382 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-hznsj\" (UID: \"d9ea6d98-59cc-4526-bf59-7328c0321f59\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" Jan 29 11:48:55 crc kubenswrapper[4766]: I0129 11:48:55.978471 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-config\") pod \"dnsmasq-dns-cd5cbd7b9-hznsj\" (UID: \"d9ea6d98-59cc-4526-bf59-7328c0321f59\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" Jan 29 11:48:55 crc kubenswrapper[4766]: I0129 11:48:55.979246 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-hznsj\" (UID: \"d9ea6d98-59cc-4526-bf59-7328c0321f59\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" Jan 29 11:48:55 crc kubenswrapper[4766]: I0129 11:48:55.979435 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-hznsj\" (UID: \"d9ea6d98-59cc-4526-bf59-7328c0321f59\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" Jan 29 11:48:55 crc kubenswrapper[4766]: I0129 11:48:55.979485 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-hznsj\" (UID: \"d9ea6d98-59cc-4526-bf59-7328c0321f59\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" Jan 29 11:48:55 crc kubenswrapper[4766]: I0129 11:48:55.979588 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-config\") pod \"dnsmasq-dns-cd5cbd7b9-hznsj\" (UID: \"d9ea6d98-59cc-4526-bf59-7328c0321f59\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" Jan 29 11:48:55 crc kubenswrapper[4766]: I0129 11:48:55.979655 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-hznsj\" (UID: \"d9ea6d98-59cc-4526-bf59-7328c0321f59\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" Jan 29 11:48:56 crc kubenswrapper[4766]: I0129 11:48:56.000800 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpm9s\" (UniqueName: 
\"kubernetes.io/projected/d9ea6d98-59cc-4526-bf59-7328c0321f59-kube-api-access-kpm9s\") pod \"dnsmasq-dns-cd5cbd7b9-hznsj\" (UID: \"d9ea6d98-59cc-4526-bf59-7328c0321f59\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" Jan 29 11:48:56 crc kubenswrapper[4766]: I0129 11:48:56.070599 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" Jan 29 11:48:56 crc kubenswrapper[4766]: I0129 11:48:56.554610 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-hznsj"] Jan 29 11:48:56 crc kubenswrapper[4766]: I0129 11:48:56.570396 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48a6bbb6-58eb-4649-88d0-a270a189d073","Type":"ContainerStarted","Data":"0c9d3f4b76479725bb9f8d788f9166396b67af0d09eda1166e5c6cfeb4f1d2c9"} Jan 29 11:48:57 crc kubenswrapper[4766]: I0129 11:48:57.579122 4766 generic.go:334] "Generic (PLEG): container finished" podID="d9ea6d98-59cc-4526-bf59-7328c0321f59" containerID="579b365946cd3511a4044d9d12ae721717e6d8a5f9a3f4c3c1ce4d75f48b8a40" exitCode=0 Jan 29 11:48:57 crc kubenswrapper[4766]: I0129 11:48:57.579449 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" event={"ID":"d9ea6d98-59cc-4526-bf59-7328c0321f59","Type":"ContainerDied","Data":"579b365946cd3511a4044d9d12ae721717e6d8a5f9a3f4c3c1ce4d75f48b8a40"} Jan 29 11:48:57 crc kubenswrapper[4766]: I0129 11:48:57.579481 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" event={"ID":"d9ea6d98-59cc-4526-bf59-7328c0321f59","Type":"ContainerStarted","Data":"a5fbf32070653413c5e083f83b6588d585cc0b28f1da2b460f5cf63726690a93"} Jan 29 11:48:57 crc kubenswrapper[4766]: I0129 11:48:57.583190 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48a6bbb6-58eb-4649-88d0-a270a189d073","Type":"ContainerStarted","Data":"446f8cae715c520e27a998ffb34753091d3733fe5fde0677310e77f355c5d289"} Jan 29 11:48:58 crc kubenswrapper[4766]: I0129 11:48:58.594062 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48a6bbb6-58eb-4649-88d0-a270a189d073","Type":"ContainerStarted","Data":"7b5bfa44d299ce58dccb937b68930f86bc6b154df6c95403c2d36800493247b3"} Jan 29 11:48:58 crc kubenswrapper[4766]: I0129 11:48:58.596685 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" event={"ID":"d9ea6d98-59cc-4526-bf59-7328c0321f59","Type":"ContainerStarted","Data":"4925aa28bf4f33e9f328ae00bd14e1da8d9f6b2c7f29cdfaaafce38d8720b42b"} Jan 29 11:48:58 crc kubenswrapper[4766]: I0129 11:48:58.597956 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" Jan 29 11:48:58 crc kubenswrapper[4766]: I0129 11:48:58.631386 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" podStartSLOduration=3.631366801 podStartE2EDuration="3.631366801s" podCreationTimestamp="2026-01-29 11:48:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:48:58.626911574 +0000 UTC m=+1675.739304585" watchObservedRunningTime="2026-01-29 11:48:58.631366801 +0000 UTC m=+1675.743759812" Jan 29 11:48:58 crc kubenswrapper[4766]: I0129 11:48:58.738462 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 
Jan 29 11:48:58 crc kubenswrapper[4766]: I0129 11:48:58.738698 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5fbb9177-98aa-46ac-a894-3ffa1ff170f2" containerName="nova-api-log" containerID="cri-o://758f4b3d4eae8bb5272af415c6ee02d6c034c0cf51452b2d3749f1f29ca5ce55" gracePeriod=30
Jan 29 11:48:58 crc kubenswrapper[4766]: I0129 11:48:58.738776 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5fbb9177-98aa-46ac-a894-3ffa1ff170f2" containerName="nova-api-api" containerID="cri-o://6db917ed4f89dc93ef26f5981ebb17045bea3e39a008238809bdf181e51e3113" gracePeriod=30
Jan 29 11:48:58 crc kubenswrapper[4766]: I0129 11:48:58.894881 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 29 11:48:59 crc kubenswrapper[4766]: I0129 11:48:59.033818 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 11:48:59 crc kubenswrapper[4766]: I0129 11:48:59.234131 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 29 11:48:59 crc kubenswrapper[4766]: I0129 11:48:59.609302 4766 generic.go:334] "Generic (PLEG): container finished" podID="5fbb9177-98aa-46ac-a894-3ffa1ff170f2" containerID="758f4b3d4eae8bb5272af415c6ee02d6c034c0cf51452b2d3749f1f29ca5ce55" exitCode=143
Jan 29 11:48:59 crc kubenswrapper[4766]: I0129 11:48:59.610721 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5fbb9177-98aa-46ac-a894-3ffa1ff170f2","Type":"ContainerDied","Data":"758f4b3d4eae8bb5272af415c6ee02d6c034c0cf51452b2d3749f1f29ca5ce55"}
Jan 29 11:49:00 crc kubenswrapper[4766]: I0129 11:49:00.622557 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="48a6bbb6-58eb-4649-88d0-a270a189d073" containerName="ceilometer-central-agent" containerID="cri-o://0c9d3f4b76479725bb9f8d788f9166396b67af0d09eda1166e5c6cfeb4f1d2c9" gracePeriod=30
Jan 29 11:49:00 crc kubenswrapper[4766]: I0129 11:49:00.622933 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48a6bbb6-58eb-4649-88d0-a270a189d073","Type":"ContainerStarted","Data":"74d31ffb20655187dd235c5fe7c105ff167c4d753baee155fc5023220797dc0e"}
Jan 29 11:49:00 crc kubenswrapper[4766]: I0129 11:49:00.622973 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="48a6bbb6-58eb-4649-88d0-a270a189d073" containerName="proxy-httpd" containerID="cri-o://74d31ffb20655187dd235c5fe7c105ff167c4d753baee155fc5023220797dc0e" gracePeriod=30
Jan 29 11:49:00 crc kubenswrapper[4766]: I0129 11:49:00.623000 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 29 11:49:00 crc kubenswrapper[4766]: I0129 11:49:00.623033 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="48a6bbb6-58eb-4649-88d0-a270a189d073" containerName="sg-core" containerID="cri-o://7b5bfa44d299ce58dccb937b68930f86bc6b154df6c95403c2d36800493247b3" gracePeriod=30
Jan 29 11:49:00 crc kubenswrapper[4766]: I0129 11:49:00.623073 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="48a6bbb6-58eb-4649-88d0-a270a189d073" containerName="ceilometer-notification-agent" containerID="cri-o://446f8cae715c520e27a998ffb34753091d3733fe5fde0677310e77f355c5d289" gracePeriod=30
Jan 29 11:49:00 crc kubenswrapper[4766]: I0129 11:49:00.643528 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.447507971 podStartE2EDuration="6.64350975s" podCreationTimestamp="2026-01-29 11:48:54 +0000 UTC" firstStartedPulling="2026-01-29 11:48:55.379780903 +0000 UTC m=+1672.492173914" lastFinishedPulling="2026-01-29 11:48:59.575782682 +0000 UTC m=+1676.688175693" observedRunningTime="2026-01-29 11:49:00.643006897 +0000 UTC m=+1677.755399928" watchObservedRunningTime="2026-01-29 11:49:00.64350975 +0000 UTC m=+1677.755902761"
Jan 29 11:49:01 crc kubenswrapper[4766]: I0129 11:49:01.633561 4766 generic.go:334] "Generic (PLEG): container finished" podID="48a6bbb6-58eb-4649-88d0-a270a189d073" containerID="74d31ffb20655187dd235c5fe7c105ff167c4d753baee155fc5023220797dc0e" exitCode=0
Jan 29 11:49:01 crc kubenswrapper[4766]: I0129 11:49:01.633894 4766 generic.go:334] "Generic (PLEG): container finished" podID="48a6bbb6-58eb-4649-88d0-a270a189d073" containerID="7b5bfa44d299ce58dccb937b68930f86bc6b154df6c95403c2d36800493247b3" exitCode=2
Jan 29 11:49:01 crc kubenswrapper[4766]: I0129 11:49:01.633910 4766 generic.go:334] "Generic (PLEG): container finished" podID="48a6bbb6-58eb-4649-88d0-a270a189d073" containerID="446f8cae715c520e27a998ffb34753091d3733fe5fde0677310e77f355c5d289" exitCode=0
Jan 29 11:49:01 crc kubenswrapper[4766]: I0129 11:49:01.633655 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48a6bbb6-58eb-4649-88d0-a270a189d073","Type":"ContainerDied","Data":"74d31ffb20655187dd235c5fe7c105ff167c4d753baee155fc5023220797dc0e"}
Jan 29 11:49:01 crc kubenswrapper[4766]: I0129 11:49:01.633950 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48a6bbb6-58eb-4649-88d0-a270a189d073","Type":"ContainerDied","Data":"7b5bfa44d299ce58dccb937b68930f86bc6b154df6c95403c2d36800493247b3"}
Jan 29 11:49:01 crc kubenswrapper[4766]: I0129 11:49:01.633967 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48a6bbb6-58eb-4649-88d0-a270a189d073","Type":"ContainerDied","Data":"446f8cae715c520e27a998ffb34753091d3733fe5fde0677310e77f355c5d289"}
Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.339168 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.413525 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fbb9177-98aa-46ac-a894-3ffa1ff170f2-logs\") pod \"5fbb9177-98aa-46ac-a894-3ffa1ff170f2\" (UID: \"5fbb9177-98aa-46ac-a894-3ffa1ff170f2\") " Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.413673 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fbb9177-98aa-46ac-a894-3ffa1ff170f2-config-data\") pod \"5fbb9177-98aa-46ac-a894-3ffa1ff170f2\" (UID: \"5fbb9177-98aa-46ac-a894-3ffa1ff170f2\") " Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.413734 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fbb9177-98aa-46ac-a894-3ffa1ff170f2-combined-ca-bundle\") pod \"5fbb9177-98aa-46ac-a894-3ffa1ff170f2\" (UID: \"5fbb9177-98aa-46ac-a894-3ffa1ff170f2\") " Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.413906 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bkb2\" (UniqueName: \"kubernetes.io/projected/5fbb9177-98aa-46ac-a894-3ffa1ff170f2-kube-api-access-4bkb2\") pod \"5fbb9177-98aa-46ac-a894-3ffa1ff170f2\" (UID: \"5fbb9177-98aa-46ac-a894-3ffa1ff170f2\") " Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.413913 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fbb9177-98aa-46ac-a894-3ffa1ff170f2-logs" (OuterVolumeSpecName: "logs") pod "5fbb9177-98aa-46ac-a894-3ffa1ff170f2" (UID: "5fbb9177-98aa-46ac-a894-3ffa1ff170f2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.414623 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fbb9177-98aa-46ac-a894-3ffa1ff170f2-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.449338 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fbb9177-98aa-46ac-a894-3ffa1ff170f2-kube-api-access-4bkb2" (OuterVolumeSpecName: "kube-api-access-4bkb2") pod "5fbb9177-98aa-46ac-a894-3ffa1ff170f2" (UID: "5fbb9177-98aa-46ac-a894-3ffa1ff170f2"). InnerVolumeSpecName "kube-api-access-4bkb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.456491 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fbb9177-98aa-46ac-a894-3ffa1ff170f2-config-data" (OuterVolumeSpecName: "config-data") pod "5fbb9177-98aa-46ac-a894-3ffa1ff170f2" (UID: "5fbb9177-98aa-46ac-a894-3ffa1ff170f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.496053 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fbb9177-98aa-46ac-a894-3ffa1ff170f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5fbb9177-98aa-46ac-a894-3ffa1ff170f2" (UID: "5fbb9177-98aa-46ac-a894-3ffa1ff170f2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.515935 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fbb9177-98aa-46ac-a894-3ffa1ff170f2-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.515979 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fbb9177-98aa-46ac-a894-3ffa1ff170f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.515992 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bkb2\" (UniqueName: \"kubernetes.io/projected/5fbb9177-98aa-46ac-a894-3ffa1ff170f2-kube-api-access-4bkb2\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.643457 4766 generic.go:334] "Generic (PLEG): container finished" podID="5fbb9177-98aa-46ac-a894-3ffa1ff170f2" containerID="6db917ed4f89dc93ef26f5981ebb17045bea3e39a008238809bdf181e51e3113" exitCode=0 Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.643506 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.643537 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5fbb9177-98aa-46ac-a894-3ffa1ff170f2","Type":"ContainerDied","Data":"6db917ed4f89dc93ef26f5981ebb17045bea3e39a008238809bdf181e51e3113"} Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.643948 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5fbb9177-98aa-46ac-a894-3ffa1ff170f2","Type":"ContainerDied","Data":"337469c261e780ac2a00a4f2bd9d63095a9ac1f11401ae402c2e9f7d125c5f0a"} Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.643971 4766 scope.go:117] "RemoveContainer" containerID="6db917ed4f89dc93ef26f5981ebb17045bea3e39a008238809bdf181e51e3113" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.686240 4766 scope.go:117] "RemoveContainer" containerID="758f4b3d4eae8bb5272af415c6ee02d6c034c0cf51452b2d3749f1f29ca5ce55" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.688705 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.721467 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.726266 4766 scope.go:117] "RemoveContainer" containerID="6db917ed4f89dc93ef26f5981ebb17045bea3e39a008238809bdf181e51e3113" Jan 29 11:49:02 crc kubenswrapper[4766]: E0129 11:49:02.727142 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6db917ed4f89dc93ef26f5981ebb17045bea3e39a008238809bdf181e51e3113\": container with ID starting with 6db917ed4f89dc93ef26f5981ebb17045bea3e39a008238809bdf181e51e3113 not found: ID does not exist" containerID="6db917ed4f89dc93ef26f5981ebb17045bea3e39a008238809bdf181e51e3113" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.727194 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6db917ed4f89dc93ef26f5981ebb17045bea3e39a008238809bdf181e51e3113"} err="failed to get container status \"6db917ed4f89dc93ef26f5981ebb17045bea3e39a008238809bdf181e51e3113\": rpc error: code = NotFound desc = could 
not find container \"6db917ed4f89dc93ef26f5981ebb17045bea3e39a008238809bdf181e51e3113\": container with ID starting with 6db917ed4f89dc93ef26f5981ebb17045bea3e39a008238809bdf181e51e3113 not found: ID does not exist" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.727228 4766 scope.go:117] "RemoveContainer" containerID="758f4b3d4eae8bb5272af415c6ee02d6c034c0cf51452b2d3749f1f29ca5ce55" Jan 29 11:49:02 crc kubenswrapper[4766]: E0129 11:49:02.727764 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"758f4b3d4eae8bb5272af415c6ee02d6c034c0cf51452b2d3749f1f29ca5ce55\": container with ID starting with 758f4b3d4eae8bb5272af415c6ee02d6c034c0cf51452b2d3749f1f29ca5ce55 not found: ID does not exist" containerID="758f4b3d4eae8bb5272af415c6ee02d6c034c0cf51452b2d3749f1f29ca5ce55" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.727792 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"758f4b3d4eae8bb5272af415c6ee02d6c034c0cf51452b2d3749f1f29ca5ce55"} err="failed to get container status \"758f4b3d4eae8bb5272af415c6ee02d6c034c0cf51452b2d3749f1f29ca5ce55\": rpc error: code = NotFound desc = could not find container \"758f4b3d4eae8bb5272af415c6ee02d6c034c0cf51452b2d3749f1f29ca5ce55\": container with ID starting with 758f4b3d4eae8bb5272af415c6ee02d6c034c0cf51452b2d3749f1f29ca5ce55 not found: ID does not exist" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.743893 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 11:49:02 crc kubenswrapper[4766]: E0129 11:49:02.744424 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fbb9177-98aa-46ac-a894-3ffa1ff170f2" containerName="nova-api-log" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.744441 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fbb9177-98aa-46ac-a894-3ffa1ff170f2" containerName="nova-api-log" Jan 29 11:49:02 crc kubenswrapper[4766]: E0129 11:49:02.744497 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fbb9177-98aa-46ac-a894-3ffa1ff170f2" containerName="nova-api-api" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.744506 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fbb9177-98aa-46ac-a894-3ffa1ff170f2" containerName="nova-api-api" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.744715 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fbb9177-98aa-46ac-a894-3ffa1ff170f2" containerName="nova-api-log" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.744738 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fbb9177-98aa-46ac-a894-3ffa1ff170f2" containerName="nova-api-api" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.745980 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.749004 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.749195 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.749231 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.766748 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.821472 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22t45\" (UniqueName: \"kubernetes.io/projected/bf39f22b-4140-45f9-8d54-65ad832c04dc-kube-api-access-22t45\") pod \"nova-api-0\" (UID: \"bf39f22b-4140-45f9-8d54-65ad832c04dc\") " pod="openstack/nova-api-0" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.821536 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf39f22b-4140-45f9-8d54-65ad832c04dc-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bf39f22b-4140-45f9-8d54-65ad832c04dc\") " pod="openstack/nova-api-0" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.821846 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf39f22b-4140-45f9-8d54-65ad832c04dc-config-data\") pod \"nova-api-0\" (UID: \"bf39f22b-4140-45f9-8d54-65ad832c04dc\") " pod="openstack/nova-api-0" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.821910 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf39f22b-4140-45f9-8d54-65ad832c04dc-logs\") pod \"nova-api-0\" (UID: \"bf39f22b-4140-45f9-8d54-65ad832c04dc\") " pod="openstack/nova-api-0" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.822038 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf39f22b-4140-45f9-8d54-65ad832c04dc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bf39f22b-4140-45f9-8d54-65ad832c04dc\") " pod="openstack/nova-api-0" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.822113 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf39f22b-4140-45f9-8d54-65ad832c04dc-public-tls-certs\") pod \"nova-api-0\" (UID: \"bf39f22b-4140-45f9-8d54-65ad832c04dc\") " pod="openstack/nova-api-0" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.924239 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf39f22b-4140-45f9-8d54-65ad832c04dc-config-data\") pod \"nova-api-0\" (UID: \"bf39f22b-4140-45f9-8d54-65ad832c04dc\") " pod="openstack/nova-api-0" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.924293 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf39f22b-4140-45f9-8d54-65ad832c04dc-logs\") pod \"nova-api-0\" (UID: 
\"bf39f22b-4140-45f9-8d54-65ad832c04dc\") " pod="openstack/nova-api-0" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.924346 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf39f22b-4140-45f9-8d54-65ad832c04dc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bf39f22b-4140-45f9-8d54-65ad832c04dc\") " pod="openstack/nova-api-0" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.924381 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf39f22b-4140-45f9-8d54-65ad832c04dc-public-tls-certs\") pod \"nova-api-0\" (UID: \"bf39f22b-4140-45f9-8d54-65ad832c04dc\") " pod="openstack/nova-api-0" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.924459 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22t45\" (UniqueName: \"kubernetes.io/projected/bf39f22b-4140-45f9-8d54-65ad832c04dc-kube-api-access-22t45\") pod \"nova-api-0\" (UID: \"bf39f22b-4140-45f9-8d54-65ad832c04dc\") " pod="openstack/nova-api-0" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.924531 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf39f22b-4140-45f9-8d54-65ad832c04dc-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bf39f22b-4140-45f9-8d54-65ad832c04dc\") " pod="openstack/nova-api-0" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.925304 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf39f22b-4140-45f9-8d54-65ad832c04dc-logs\") pod \"nova-api-0\" (UID: \"bf39f22b-4140-45f9-8d54-65ad832c04dc\") " pod="openstack/nova-api-0" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.929185 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf39f22b-4140-45f9-8d54-65ad832c04dc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bf39f22b-4140-45f9-8d54-65ad832c04dc\") " pod="openstack/nova-api-0" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.929871 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf39f22b-4140-45f9-8d54-65ad832c04dc-public-tls-certs\") pod \"nova-api-0\" (UID: \"bf39f22b-4140-45f9-8d54-65ad832c04dc\") " pod="openstack/nova-api-0" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.930082 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf39f22b-4140-45f9-8d54-65ad832c04dc-config-data\") pod \"nova-api-0\" (UID: \"bf39f22b-4140-45f9-8d54-65ad832c04dc\") " pod="openstack/nova-api-0" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.931215 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf39f22b-4140-45f9-8d54-65ad832c04dc-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bf39f22b-4140-45f9-8d54-65ad832c04dc\") " pod="openstack/nova-api-0" Jan 29 11:49:02 crc kubenswrapper[4766]: I0129 11:49:02.942918 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22t45\" (UniqueName: \"kubernetes.io/projected/bf39f22b-4140-45f9-8d54-65ad832c04dc-kube-api-access-22t45\") pod \"nova-api-0\" (UID: \"bf39f22b-4140-45f9-8d54-65ad832c04dc\") " 
pod="openstack/nova-api-0" Jan 29 11:49:03 crc kubenswrapper[4766]: I0129 11:49:03.063912 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:49:03 crc kubenswrapper[4766]: I0129 11:49:03.240521 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fbb9177-98aa-46ac-a894-3ffa1ff170f2" path="/var/lib/kubelet/pods/5fbb9177-98aa-46ac-a894-3ffa1ff170f2/volumes" Jan 29 11:49:03 crc kubenswrapper[4766]: I0129 11:49:03.560391 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:49:03 crc kubenswrapper[4766]: I0129 11:49:03.658434 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bf39f22b-4140-45f9-8d54-65ad832c04dc","Type":"ContainerStarted","Data":"75ddd87af93efd31c1d72913b6e8e32e08443dd8e0c56f71d6bfedbd6b4a0404"} Jan 29 11:49:03 crc kubenswrapper[4766]: I0129 11:49:03.662396 4766 generic.go:334] "Generic (PLEG): container finished" podID="48a6bbb6-58eb-4649-88d0-a270a189d073" containerID="0c9d3f4b76479725bb9f8d788f9166396b67af0d09eda1166e5c6cfeb4f1d2c9" exitCode=0 Jan 29 11:49:03 crc kubenswrapper[4766]: I0129 11:49:03.662458 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48a6bbb6-58eb-4649-88d0-a270a189d073","Type":"ContainerDied","Data":"0c9d3f4b76479725bb9f8d788f9166396b67af0d09eda1166e5c6cfeb4f1d2c9"} Jan 29 11:49:03 crc kubenswrapper[4766]: I0129 11:49:03.709755 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:49:03 crc kubenswrapper[4766]: I0129 11:49:03.845630 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-ceilometer-tls-certs\") pod \"48a6bbb6-58eb-4649-88d0-a270a189d073\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " Jan 29 11:49:03 crc kubenswrapper[4766]: I0129 11:49:03.845950 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48a6bbb6-58eb-4649-88d0-a270a189d073-log-httpd\") pod \"48a6bbb6-58eb-4649-88d0-a270a189d073\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " Jan 29 11:49:03 crc kubenswrapper[4766]: I0129 11:49:03.846095 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-combined-ca-bundle\") pod \"48a6bbb6-58eb-4649-88d0-a270a189d073\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " Jan 29 11:49:03 crc kubenswrapper[4766]: I0129 11:49:03.846236 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-sg-core-conf-yaml\") pod \"48a6bbb6-58eb-4649-88d0-a270a189d073\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " Jan 29 11:49:03 crc kubenswrapper[4766]: I0129 11:49:03.846674 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48a6bbb6-58eb-4649-88d0-a270a189d073-run-httpd\") pod \"48a6bbb6-58eb-4649-88d0-a270a189d073\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " Jan 29 11:49:03 crc kubenswrapper[4766]: I0129 11:49:03.846858 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9bjk\" 
(UniqueName: \"kubernetes.io/projected/48a6bbb6-58eb-4649-88d0-a270a189d073-kube-api-access-q9bjk\") pod \"48a6bbb6-58eb-4649-88d0-a270a189d073\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " Jan 29 11:49:03 crc kubenswrapper[4766]: I0129 11:49:03.846965 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-config-data\") pod \"48a6bbb6-58eb-4649-88d0-a270a189d073\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " Jan 29 11:49:03 crc kubenswrapper[4766]: I0129 11:49:03.847052 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-scripts\") pod \"48a6bbb6-58eb-4649-88d0-a270a189d073\" (UID: \"48a6bbb6-58eb-4649-88d0-a270a189d073\") " Jan 29 11:49:03 crc kubenswrapper[4766]: I0129 11:49:03.846478 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48a6bbb6-58eb-4649-88d0-a270a189d073-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "48a6bbb6-58eb-4649-88d0-a270a189d073" (UID: "48a6bbb6-58eb-4649-88d0-a270a189d073"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:49:03 crc kubenswrapper[4766]: I0129 11:49:03.848158 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48a6bbb6-58eb-4649-88d0-a270a189d073-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "48a6bbb6-58eb-4649-88d0-a270a189d073" (UID: "48a6bbb6-58eb-4649-88d0-a270a189d073"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:49:03 crc kubenswrapper[4766]: I0129 11:49:03.848521 4766 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48a6bbb6-58eb-4649-88d0-a270a189d073-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:03 crc kubenswrapper[4766]: I0129 11:49:03.848610 4766 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48a6bbb6-58eb-4649-88d0-a270a189d073-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:03 crc kubenswrapper[4766]: I0129 11:49:03.855707 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-scripts" (OuterVolumeSpecName: "scripts") pod "48a6bbb6-58eb-4649-88d0-a270a189d073" (UID: "48a6bbb6-58eb-4649-88d0-a270a189d073"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:49:03 crc kubenswrapper[4766]: I0129 11:49:03.855762 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48a6bbb6-58eb-4649-88d0-a270a189d073-kube-api-access-q9bjk" (OuterVolumeSpecName: "kube-api-access-q9bjk") pod "48a6bbb6-58eb-4649-88d0-a270a189d073" (UID: "48a6bbb6-58eb-4649-88d0-a270a189d073"). InnerVolumeSpecName "kube-api-access-q9bjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:49:03 crc kubenswrapper[4766]: I0129 11:49:03.887862 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "48a6bbb6-58eb-4649-88d0-a270a189d073" (UID: "48a6bbb6-58eb-4649-88d0-a270a189d073"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:49:03 crc kubenswrapper[4766]: I0129 11:49:03.918205 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "48a6bbb6-58eb-4649-88d0-a270a189d073" (UID: "48a6bbb6-58eb-4649-88d0-a270a189d073"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:49:03 crc kubenswrapper[4766]: I0129 11:49:03.950623 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9bjk\" (UniqueName: \"kubernetes.io/projected/48a6bbb6-58eb-4649-88d0-a270a189d073-kube-api-access-q9bjk\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:03 crc kubenswrapper[4766]: I0129 11:49:03.950657 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:03 crc kubenswrapper[4766]: I0129 11:49:03.950668 4766 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:03 crc kubenswrapper[4766]: I0129 11:49:03.950676 4766 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:03 crc kubenswrapper[4766]: I0129 11:49:03.957228 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48a6bbb6-58eb-4649-88d0-a270a189d073" (UID: "48a6bbb6-58eb-4649-88d0-a270a189d073"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:49:03 crc kubenswrapper[4766]: I0129 11:49:03.965893 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-config-data" (OuterVolumeSpecName: "config-data") pod "48a6bbb6-58eb-4649-88d0-a270a189d073" (UID: "48a6bbb6-58eb-4649-88d0-a270a189d073"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.052274 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.052744 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48a6bbb6-58eb-4649-88d0-a270a189d073-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.225452 4766 scope.go:117] "RemoveContainer" containerID="0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d" Jan 29 11:49:04 crc kubenswrapper[4766]: E0129 11:49:04.225729 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.230682 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.249586 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.675002 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bf39f22b-4140-45f9-8d54-65ad832c04dc","Type":"ContainerStarted","Data":"3a5d24ec17a725f52aca319216448647a8aeaf15516ce756b4eebb86e4ce4530"} Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.675077 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bf39f22b-4140-45f9-8d54-65ad832c04dc","Type":"ContainerStarted","Data":"16868a5c3234ec77813550ffac81d5e376c1c5e9ecc0b8eb8c2ccad6b527ff84"} Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.677916 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.677925 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48a6bbb6-58eb-4649-88d0-a270a189d073","Type":"ContainerDied","Data":"b57f4f8810f091ba1d708fc69691042c49213b1f4d97c74bfefd0d69b4d8b90f"} Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.677987 4766 scope.go:117] "RemoveContainer" containerID="74d31ffb20655187dd235c5fe7c105ff167c4d753baee155fc5023220797dc0e" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.713311 4766 scope.go:117] "RemoveContainer" containerID="7b5bfa44d299ce58dccb937b68930f86bc6b154df6c95403c2d36800493247b3" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.713323 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.719775 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.719756936 podStartE2EDuration="2.719756936s" podCreationTimestamp="2026-01-29 11:49:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:49:04.705302775 +0000 UTC m=+1681.817695796" watchObservedRunningTime="2026-01-29 11:49:04.719756936 +0000 UTC m=+1681.832149957" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.757165 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.758910 4766 scope.go:117] "RemoveContainer" containerID="446f8cae715c520e27a998ffb34753091d3733fe5fde0677310e77f355c5d289" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.767160 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.778501 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:49:04 crc kubenswrapper[4766]: E0129 11:49:04.778919 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a6bbb6-58eb-4649-88d0-a270a189d073" containerName="sg-core" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.778931 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a6bbb6-58eb-4649-88d0-a270a189d073" containerName="sg-core" Jan 29 11:49:04 crc kubenswrapper[4766]: E0129 11:49:04.778945 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a6bbb6-58eb-4649-88d0-a270a189d073" containerName="proxy-httpd" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.778951 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a6bbb6-58eb-4649-88d0-a270a189d073" containerName="proxy-httpd" Jan 29 11:49:04 crc kubenswrapper[4766]: E0129 11:49:04.778973 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a6bbb6-58eb-4649-88d0-a270a189d073" containerName="ceilometer-notification-agent" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.778979 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a6bbb6-58eb-4649-88d0-a270a189d073" containerName="ceilometer-notification-agent" Jan 29 11:49:04 crc kubenswrapper[4766]: E0129 11:49:04.779001 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a6bbb6-58eb-4649-88d0-a270a189d073" containerName="ceilometer-central-agent" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.779006 4766 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="48a6bbb6-58eb-4649-88d0-a270a189d073" containerName="ceilometer-central-agent" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.779153 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="48a6bbb6-58eb-4649-88d0-a270a189d073" containerName="ceilometer-notification-agent" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.779169 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="48a6bbb6-58eb-4649-88d0-a270a189d073" containerName="sg-core" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.779183 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="48a6bbb6-58eb-4649-88d0-a270a189d073" containerName="proxy-httpd" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.779194 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="48a6bbb6-58eb-4649-88d0-a270a189d073" containerName="ceilometer-central-agent" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.781251 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.784217 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.784253 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.787231 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.809441 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.826842 4766 scope.go:117] "RemoveContainer" containerID="0c9d3f4b76479725bb9f8d788f9166396b67af0d09eda1166e5c6cfeb4f1d2c9" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.873999 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") " pod="openstack/ceilometer-0" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.874096 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") " pod="openstack/ceilometer-0" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.874133 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bb21068-54ea-4e08-b03e-5186a35d7a09-log-httpd\") pod \"ceilometer-0\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") " pod="openstack/ceilometer-0" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.874172 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") " pod="openstack/ceilometer-0" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.874204 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-scripts\") pod \"ceilometer-0\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") " pod="openstack/ceilometer-0" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.874234 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8scn\" (UniqueName: \"kubernetes.io/projected/6bb21068-54ea-4e08-b03e-5186a35d7a09-kube-api-access-c8scn\") pod \"ceilometer-0\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") " pod="openstack/ceilometer-0" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.874260 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-config-data\") pod \"ceilometer-0\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") " pod="openstack/ceilometer-0" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.874300 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bb21068-54ea-4e08-b03e-5186a35d7a09-run-httpd\") pod \"ceilometer-0\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") " pod="openstack/ceilometer-0" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.889378 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-22n2k"] Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.890942 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-22n2k" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.894101 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.894429 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.898921 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-22n2k"] Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.977233 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8e7c3a2-1a70-4e43-84db-21832edfdfe1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-22n2k\" (UID: \"c8e7c3a2-1a70-4e43-84db-21832edfdfe1\") " pod="openstack/nova-cell1-cell-mapping-22n2k" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.977270 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8e7c3a2-1a70-4e43-84db-21832edfdfe1-config-data\") pod \"nova-cell1-cell-mapping-22n2k\" (UID: \"c8e7c3a2-1a70-4e43-84db-21832edfdfe1\") " pod="openstack/nova-cell1-cell-mapping-22n2k" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.977322 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") " pod="openstack/ceilometer-0" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.977336 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8e7c3a2-1a70-4e43-84db-21832edfdfe1-scripts\") pod \"nova-cell1-cell-mapping-22n2k\" (UID: \"c8e7c3a2-1a70-4e43-84db-21832edfdfe1\") " pod="openstack/nova-cell1-cell-mapping-22n2k" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.977438 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") " pod="openstack/ceilometer-0" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.977477 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bb21068-54ea-4e08-b03e-5186a35d7a09-log-httpd\") pod \"ceilometer-0\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") " pod="openstack/ceilometer-0" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.977510 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") " pod="openstack/ceilometer-0" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.977535 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-scripts\") pod \"ceilometer-0\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") " pod="openstack/ceilometer-0" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.977567 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8scn\" (UniqueName: \"kubernetes.io/projected/6bb21068-54ea-4e08-b03e-5186a35d7a09-kube-api-access-c8scn\") pod \"ceilometer-0\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") " pod="openstack/ceilometer-0" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.977589 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kdjg\" (UniqueName: \"kubernetes.io/projected/c8e7c3a2-1a70-4e43-84db-21832edfdfe1-kube-api-access-2kdjg\") pod \"nova-cell1-cell-mapping-22n2k\" (UID: \"c8e7c3a2-1a70-4e43-84db-21832edfdfe1\") " pod="openstack/nova-cell1-cell-mapping-22n2k" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.977609 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-config-data\") pod \"ceilometer-0\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") " pod="openstack/ceilometer-0" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.977643 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bb21068-54ea-4e08-b03e-5186a35d7a09-run-httpd\") pod \"ceilometer-0\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") " pod="openstack/ceilometer-0" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.977915 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bb21068-54ea-4e08-b03e-5186a35d7a09-log-httpd\") pod \"ceilometer-0\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") " pod="openstack/ceilometer-0" Jan 29 
11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.977933 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bb21068-54ea-4e08-b03e-5186a35d7a09-run-httpd\") pod \"ceilometer-0\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") " pod="openstack/ceilometer-0" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.979131 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.979335 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.979689 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.982404 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") " pod="openstack/ceilometer-0" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.992398 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-scripts\") pod \"ceilometer-0\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") " pod="openstack/ceilometer-0" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.993119 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") " pod="openstack/ceilometer-0" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.993377 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-config-data\") pod \"ceilometer-0\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") " pod="openstack/ceilometer-0" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.993439 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") " pod="openstack/ceilometer-0" Jan 29 11:49:04 crc kubenswrapper[4766]: I0129 11:49:04.995664 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8scn\" (UniqueName: \"kubernetes.io/projected/6bb21068-54ea-4e08-b03e-5186a35d7a09-kube-api-access-c8scn\") pod \"ceilometer-0\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") " pod="openstack/ceilometer-0" Jan 29 11:49:05 crc kubenswrapper[4766]: I0129 11:49:05.079352 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8e7c3a2-1a70-4e43-84db-21832edfdfe1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-22n2k\" (UID: \"c8e7c3a2-1a70-4e43-84db-21832edfdfe1\") " pod="openstack/nova-cell1-cell-mapping-22n2k" Jan 29 11:49:05 crc kubenswrapper[4766]: I0129 11:49:05.079432 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c8e7c3a2-1a70-4e43-84db-21832edfdfe1-config-data\") pod \"nova-cell1-cell-mapping-22n2k\" (UID: \"c8e7c3a2-1a70-4e43-84db-21832edfdfe1\") " pod="openstack/nova-cell1-cell-mapping-22n2k" Jan 29 11:49:05 crc kubenswrapper[4766]: I0129 11:49:05.079478 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8e7c3a2-1a70-4e43-84db-21832edfdfe1-scripts\") pod \"nova-cell1-cell-mapping-22n2k\" (UID: \"c8e7c3a2-1a70-4e43-84db-21832edfdfe1\") " pod="openstack/nova-cell1-cell-mapping-22n2k" Jan 29 11:49:05 crc kubenswrapper[4766]: I0129 11:49:05.079612 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kdjg\" (UniqueName: \"kubernetes.io/projected/c8e7c3a2-1a70-4e43-84db-21832edfdfe1-kube-api-access-2kdjg\") pod \"nova-cell1-cell-mapping-22n2k\" (UID: \"c8e7c3a2-1a70-4e43-84db-21832edfdfe1\") " pod="openstack/nova-cell1-cell-mapping-22n2k" Jan 29 11:49:05 crc kubenswrapper[4766]: I0129 11:49:05.081521 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 29 11:49:05 crc kubenswrapper[4766]: I0129 11:49:05.081778 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 29 11:49:05 crc kubenswrapper[4766]: I0129 11:49:05.085231 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8e7c3a2-1a70-4e43-84db-21832edfdfe1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-22n2k\" (UID: \"c8e7c3a2-1a70-4e43-84db-21832edfdfe1\") " pod="openstack/nova-cell1-cell-mapping-22n2k" Jan 29 11:49:05 crc kubenswrapper[4766]: I0129 11:49:05.094118 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8e7c3a2-1a70-4e43-84db-21832edfdfe1-scripts\") pod \"nova-cell1-cell-mapping-22n2k\" (UID: \"c8e7c3a2-1a70-4e43-84db-21832edfdfe1\") " pod="openstack/nova-cell1-cell-mapping-22n2k" Jan 29 11:49:05 crc kubenswrapper[4766]: I0129 11:49:05.095725 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8e7c3a2-1a70-4e43-84db-21832edfdfe1-config-data\") pod \"nova-cell1-cell-mapping-22n2k\" (UID: \"c8e7c3a2-1a70-4e43-84db-21832edfdfe1\") " pod="openstack/nova-cell1-cell-mapping-22n2k" Jan 29 11:49:05 crc kubenswrapper[4766]: I0129 11:49:05.105557 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kdjg\" (UniqueName: \"kubernetes.io/projected/c8e7c3a2-1a70-4e43-84db-21832edfdfe1-kube-api-access-2kdjg\") pod \"nova-cell1-cell-mapping-22n2k\" (UID: \"c8e7c3a2-1a70-4e43-84db-21832edfdfe1\") " pod="openstack/nova-cell1-cell-mapping-22n2k" Jan 29 11:49:05 crc kubenswrapper[4766]: I0129 11:49:05.109826 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:49:05 crc kubenswrapper[4766]: I0129 11:49:05.209341 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-22n2k" Jan 29 11:49:05 crc kubenswrapper[4766]: I0129 11:49:05.248623 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48a6bbb6-58eb-4649-88d0-a270a189d073" path="/var/lib/kubelet/pods/48a6bbb6-58eb-4649-88d0-a270a189d073/volumes" Jan 29 11:49:05 crc kubenswrapper[4766]: I0129 11:49:05.650509 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:49:05 crc kubenswrapper[4766]: I0129 11:49:05.692434 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bb21068-54ea-4e08-b03e-5186a35d7a09","Type":"ContainerStarted","Data":"ecc719b1ae0da94bd5cfb0ed8cb0d8200f9aea70a7edba99fe084d6f34924892"} Jan 29 11:49:05 crc kubenswrapper[4766]: I0129 11:49:05.708636 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-22n2k"] Jan 29 11:49:05 crc kubenswrapper[4766]: W0129 11:49:05.717480 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8e7c3a2_1a70_4e43_84db_21832edfdfe1.slice/crio-d87e18b1608ffa303d8fc74adf30c0a577d05950b37082c24f66ffcf5ade777a WatchSource:0}: Error finding container d87e18b1608ffa303d8fc74adf30c0a577d05950b37082c24f66ffcf5ade777a: Status 404 returned error can't find the container with id d87e18b1608ffa303d8fc74adf30c0a577d05950b37082c24f66ffcf5ade777a Jan 29 11:49:06 crc kubenswrapper[4766]: I0129 11:49:06.073221 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" Jan 29 11:49:06 crc kubenswrapper[4766]: I0129 11:49:06.153004 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-7t62v"] Jan 29 11:49:06 crc kubenswrapper[4766]: I0129 11:49:06.153239 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-7t62v" podUID="1f8d0686-f650-46fa-a71e-035021e90814" containerName="dnsmasq-dns" containerID="cri-o://34c1031d9864c76d7e48c80936da3c90b418e4c5bde5657d1d22a877c2f13a8c" gracePeriod=10 Jan 29 11:49:06 crc kubenswrapper[4766]: I0129 11:49:06.738647 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bb21068-54ea-4e08-b03e-5186a35d7a09","Type":"ContainerStarted","Data":"7d416bce038e327e0ac5e80d025af4b30f549a9927b2ce75a0d83f38a53e6163"} Jan 29 11:49:06 crc kubenswrapper[4766]: I0129 11:49:06.749404 4766 generic.go:334] "Generic (PLEG): container finished" podID="1f8d0686-f650-46fa-a71e-035021e90814" containerID="34c1031d9864c76d7e48c80936da3c90b418e4c5bde5657d1d22a877c2f13a8c" exitCode=0 Jan 29 11:49:06 crc kubenswrapper[4766]: I0129 11:49:06.749797 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-7t62v" event={"ID":"1f8d0686-f650-46fa-a71e-035021e90814","Type":"ContainerDied","Data":"34c1031d9864c76d7e48c80936da3c90b418e4c5bde5657d1d22a877c2f13a8c"} Jan 29 11:49:06 crc kubenswrapper[4766]: I0129 11:49:06.758303 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-22n2k" event={"ID":"c8e7c3a2-1a70-4e43-84db-21832edfdfe1","Type":"ContainerStarted","Data":"657c679c3282f0775c066abcb6cd841de1f904e5d86da1996d18f252a03f6653"} Jan 29 11:49:06 crc kubenswrapper[4766]: I0129 11:49:06.759506 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-22n2k" 
event={"ID":"c8e7c3a2-1a70-4e43-84db-21832edfdfe1","Type":"ContainerStarted","Data":"d87e18b1608ffa303d8fc74adf30c0a577d05950b37082c24f66ffcf5ade777a"} Jan 29 11:49:06 crc kubenswrapper[4766]: I0129 11:49:06.783708 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-22n2k" podStartSLOduration=2.783687948 podStartE2EDuration="2.783687948s" podCreationTimestamp="2026-01-29 11:49:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:49:06.774136496 +0000 UTC m=+1683.886529537" watchObservedRunningTime="2026-01-29 11:49:06.783687948 +0000 UTC m=+1683.896080979" Jan 29 11:49:06 crc kubenswrapper[4766]: I0129 11:49:06.788898 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-7t62v" Jan 29 11:49:06 crc kubenswrapper[4766]: I0129 11:49:06.949580 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-config\") pod \"1f8d0686-f650-46fa-a71e-035021e90814\" (UID: \"1f8d0686-f650-46fa-a71e-035021e90814\") " Jan 29 11:49:06 crc kubenswrapper[4766]: I0129 11:49:06.949651 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-ovsdbserver-sb\") pod \"1f8d0686-f650-46fa-a71e-035021e90814\" (UID: \"1f8d0686-f650-46fa-a71e-035021e90814\") " Jan 29 11:49:06 crc kubenswrapper[4766]: I0129 11:49:06.949720 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-ovsdbserver-nb\") pod \"1f8d0686-f650-46fa-a71e-035021e90814\" (UID: \"1f8d0686-f650-46fa-a71e-035021e90814\") " Jan 29 11:49:06 crc kubenswrapper[4766]: I0129 11:49:06.949805 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x82c\" (UniqueName: \"kubernetes.io/projected/1f8d0686-f650-46fa-a71e-035021e90814-kube-api-access-6x82c\") pod \"1f8d0686-f650-46fa-a71e-035021e90814\" (UID: \"1f8d0686-f650-46fa-a71e-035021e90814\") " Jan 29 11:49:06 crc kubenswrapper[4766]: I0129 11:49:06.949953 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-dns-svc\") pod \"1f8d0686-f650-46fa-a71e-035021e90814\" (UID: \"1f8d0686-f650-46fa-a71e-035021e90814\") " Jan 29 11:49:06 crc kubenswrapper[4766]: I0129 11:49:06.949997 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-dns-swift-storage-0\") pod \"1f8d0686-f650-46fa-a71e-035021e90814\" (UID: \"1f8d0686-f650-46fa-a71e-035021e90814\") " Jan 29 11:49:06 crc kubenswrapper[4766]: I0129 11:49:06.955186 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f8d0686-f650-46fa-a71e-035021e90814-kube-api-access-6x82c" (OuterVolumeSpecName: "kube-api-access-6x82c") pod "1f8d0686-f650-46fa-a71e-035021e90814" (UID: "1f8d0686-f650-46fa-a71e-035021e90814"). InnerVolumeSpecName "kube-api-access-6x82c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:49:07 crc kubenswrapper[4766]: I0129 11:49:07.009107 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1f8d0686-f650-46fa-a71e-035021e90814" (UID: "1f8d0686-f650-46fa-a71e-035021e90814"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:49:07 crc kubenswrapper[4766]: I0129 11:49:07.009115 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-config" (OuterVolumeSpecName: "config") pod "1f8d0686-f650-46fa-a71e-035021e90814" (UID: "1f8d0686-f650-46fa-a71e-035021e90814"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:49:07 crc kubenswrapper[4766]: I0129 11:49:07.012901 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1f8d0686-f650-46fa-a71e-035021e90814" (UID: "1f8d0686-f650-46fa-a71e-035021e90814"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:49:07 crc kubenswrapper[4766]: I0129 11:49:07.012975 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1f8d0686-f650-46fa-a71e-035021e90814" (UID: "1f8d0686-f650-46fa-a71e-035021e90814"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:49:07 crc kubenswrapper[4766]: I0129 11:49:07.025501 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1f8d0686-f650-46fa-a71e-035021e90814" (UID: "1f8d0686-f650-46fa-a71e-035021e90814"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:49:07 crc kubenswrapper[4766]: I0129 11:49:07.054721 4766 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:07 crc kubenswrapper[4766]: I0129 11:49:07.054841 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:07 crc kubenswrapper[4766]: I0129 11:49:07.054911 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:07 crc kubenswrapper[4766]: I0129 11:49:07.055493 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:07 crc kubenswrapper[4766]: I0129 11:49:07.055576 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x82c\" (UniqueName: \"kubernetes.io/projected/1f8d0686-f650-46fa-a71e-035021e90814-kube-api-access-6x82c\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:07 crc kubenswrapper[4766]: I0129 11:49:07.055639 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f8d0686-f650-46fa-a71e-035021e90814-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:07 crc kubenswrapper[4766]: I0129 11:49:07.771921 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-7t62v" event={"ID":"1f8d0686-f650-46fa-a71e-035021e90814","Type":"ContainerDied","Data":"f1bda9da20f1d31f642bb02d01e92ba818d809ed6daddcd0802e220b5291fee5"} Jan 29 11:49:07 crc kubenswrapper[4766]: I0129 11:49:07.773147 4766 scope.go:117] "RemoveContainer" containerID="34c1031d9864c76d7e48c80936da3c90b418e4c5bde5657d1d22a877c2f13a8c" Jan 29 11:49:07 crc kubenswrapper[4766]: I0129 11:49:07.772164 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-7t62v" Jan 29 11:49:07 crc kubenswrapper[4766]: I0129 11:49:07.777494 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bb21068-54ea-4e08-b03e-5186a35d7a09","Type":"ContainerStarted","Data":"52b9aceb7fcf91e3fea6020d24cd2f5e816f8e95a93472d9f4a950055b986415"} Jan 29 11:49:07 crc kubenswrapper[4766]: I0129 11:49:07.797498 4766 scope.go:117] "RemoveContainer" containerID="6f7ca2a6d21d626cc1b52a2b3fd238045ab39b3e93dfc5fe14f09b4743f77fdc" Jan 29 11:49:07 crc kubenswrapper[4766]: I0129 11:49:07.825993 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-7t62v"] Jan 29 11:49:07 crc kubenswrapper[4766]: I0129 11:49:07.838984 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-7t62v"] Jan 29 11:49:08 crc kubenswrapper[4766]: I0129 11:49:08.789612 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bb21068-54ea-4e08-b03e-5186a35d7a09","Type":"ContainerStarted","Data":"10926e24d0436cca11a58b6675241744a47408f25fc95907f65bdb78e9c1e372"} Jan 29 11:49:09 crc kubenswrapper[4766]: I0129 11:49:09.238617 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f8d0686-f650-46fa-a71e-035021e90814" path="/var/lib/kubelet/pods/1f8d0686-f650-46fa-a71e-035021e90814/volumes" Jan 29 11:49:10 crc kubenswrapper[4766]: I0129 11:49:10.826529 4766 generic.go:334] "Generic (PLEG): container finished" podID="c8e7c3a2-1a70-4e43-84db-21832edfdfe1" containerID="657c679c3282f0775c066abcb6cd841de1f904e5d86da1996d18f252a03f6653" exitCode=0 Jan 29 11:49:10 crc kubenswrapper[4766]: I0129 11:49:10.826637 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-22n2k" event={"ID":"c8e7c3a2-1a70-4e43-84db-21832edfdfe1","Type":"ContainerDied","Data":"657c679c3282f0775c066abcb6cd841de1f904e5d86da1996d18f252a03f6653"} Jan 29 11:49:10 crc kubenswrapper[4766]: I0129 11:49:10.830531 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bb21068-54ea-4e08-b03e-5186a35d7a09","Type":"ContainerStarted","Data":"2f0fdc25b25c46bbd38ca0f02d558f7c1d71098932ed72e4a7f35d5b8f371421"} Jan 29 11:49:10 crc kubenswrapper[4766]: I0129 11:49:10.830947 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 11:49:10 crc kubenswrapper[4766]: I0129 11:49:10.877208 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.8202641699999997 podStartE2EDuration="6.877184227s" podCreationTimestamp="2026-01-29 11:49:04 +0000 UTC" firstStartedPulling="2026-01-29 11:49:05.645786611 +0000 UTC m=+1682.758179622" lastFinishedPulling="2026-01-29 11:49:09.702706668 +0000 UTC m=+1686.815099679" observedRunningTime="2026-01-29 11:49:10.874588699 +0000 UTC m=+1687.986981710" watchObservedRunningTime="2026-01-29 11:49:10.877184227 +0000 UTC m=+1687.989577238" Jan 29 11:49:12 crc kubenswrapper[4766]: I0129 11:49:12.166978 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-22n2k" Jan 29 11:49:12 crc kubenswrapper[4766]: I0129 11:49:12.271673 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8e7c3a2-1a70-4e43-84db-21832edfdfe1-config-data\") pod \"c8e7c3a2-1a70-4e43-84db-21832edfdfe1\" (UID: \"c8e7c3a2-1a70-4e43-84db-21832edfdfe1\") " Jan 29 11:49:12 crc kubenswrapper[4766]: I0129 11:49:12.271842 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8e7c3a2-1a70-4e43-84db-21832edfdfe1-combined-ca-bundle\") pod \"c8e7c3a2-1a70-4e43-84db-21832edfdfe1\" (UID: \"c8e7c3a2-1a70-4e43-84db-21832edfdfe1\") " Jan 29 11:49:12 crc kubenswrapper[4766]: I0129 11:49:12.271928 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8e7c3a2-1a70-4e43-84db-21832edfdfe1-scripts\") pod \"c8e7c3a2-1a70-4e43-84db-21832edfdfe1\" (UID: \"c8e7c3a2-1a70-4e43-84db-21832edfdfe1\") " Jan 29 11:49:12 crc kubenswrapper[4766]: I0129 11:49:12.271948 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kdjg\" (UniqueName: \"kubernetes.io/projected/c8e7c3a2-1a70-4e43-84db-21832edfdfe1-kube-api-access-2kdjg\") pod \"c8e7c3a2-1a70-4e43-84db-21832edfdfe1\" (UID: \"c8e7c3a2-1a70-4e43-84db-21832edfdfe1\") " Jan 29 11:49:12 crc kubenswrapper[4766]: I0129 11:49:12.277663 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8e7c3a2-1a70-4e43-84db-21832edfdfe1-kube-api-access-2kdjg" (OuterVolumeSpecName: "kube-api-access-2kdjg") pod "c8e7c3a2-1a70-4e43-84db-21832edfdfe1" (UID: "c8e7c3a2-1a70-4e43-84db-21832edfdfe1"). InnerVolumeSpecName "kube-api-access-2kdjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:49:12 crc kubenswrapper[4766]: I0129 11:49:12.290556 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8e7c3a2-1a70-4e43-84db-21832edfdfe1-scripts" (OuterVolumeSpecName: "scripts") pod "c8e7c3a2-1a70-4e43-84db-21832edfdfe1" (UID: "c8e7c3a2-1a70-4e43-84db-21832edfdfe1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:49:12 crc kubenswrapper[4766]: I0129 11:49:12.297050 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8e7c3a2-1a70-4e43-84db-21832edfdfe1-config-data" (OuterVolumeSpecName: "config-data") pod "c8e7c3a2-1a70-4e43-84db-21832edfdfe1" (UID: "c8e7c3a2-1a70-4e43-84db-21832edfdfe1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:49:12 crc kubenswrapper[4766]: I0129 11:49:12.306649 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8e7c3a2-1a70-4e43-84db-21832edfdfe1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8e7c3a2-1a70-4e43-84db-21832edfdfe1" (UID: "c8e7c3a2-1a70-4e43-84db-21832edfdfe1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:49:12 crc kubenswrapper[4766]: I0129 11:49:12.373499 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8e7c3a2-1a70-4e43-84db-21832edfdfe1-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:12 crc kubenswrapper[4766]: I0129 11:49:12.373526 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8e7c3a2-1a70-4e43-84db-21832edfdfe1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:12 crc kubenswrapper[4766]: I0129 11:49:12.373536 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8e7c3a2-1a70-4e43-84db-21832edfdfe1-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:12 crc kubenswrapper[4766]: I0129 11:49:12.373543 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kdjg\" (UniqueName: \"kubernetes.io/projected/c8e7c3a2-1a70-4e43-84db-21832edfdfe1-kube-api-access-2kdjg\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:12 crc kubenswrapper[4766]: I0129 11:49:12.856310 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-22n2k" event={"ID":"c8e7c3a2-1a70-4e43-84db-21832edfdfe1","Type":"ContainerDied","Data":"d87e18b1608ffa303d8fc74adf30c0a577d05950b37082c24f66ffcf5ade777a"} Jan 29 11:49:12 crc kubenswrapper[4766]: I0129 11:49:12.856666 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d87e18b1608ffa303d8fc74adf30c0a577d05950b37082c24f66ffcf5ade777a" Jan 29 11:49:12 crc kubenswrapper[4766]: I0129 11:49:12.856760 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-22n2k" Jan 29 11:49:13 crc kubenswrapper[4766]: I0129 11:49:13.064961 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 11:49:13 crc kubenswrapper[4766]: I0129 11:49:13.065056 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 11:49:13 crc kubenswrapper[4766]: I0129 11:49:13.135832 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:49:13 crc kubenswrapper[4766]: I0129 11:49:13.145402 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:49:13 crc kubenswrapper[4766]: I0129 11:49:13.145664 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0108fc2a-9d13-4196-bb57-b72855958161" containerName="nova-scheduler-scheduler" containerID="cri-o://2034d6227e9a728ab1260c8c65a4e7d76c258e42cbb015afa364f3438e79f915" gracePeriod=30 Jan 29 11:49:13 crc kubenswrapper[4766]: I0129 11:49:13.196136 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:49:13 crc kubenswrapper[4766]: I0129 11:49:13.196773 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2d1764d6-be26-4597-b4bf-141727790edf" containerName="nova-metadata-log" containerID="cri-o://13fb44589cf74d5e3ad8944c0da984fde5889b5fe0e2e188eafbf5713410f1bd" gracePeriod=30 Jan 29 11:49:13 crc kubenswrapper[4766]: I0129 11:49:13.197181 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2d1764d6-be26-4597-b4bf-141727790edf" 
containerName="nova-metadata-metadata" containerID="cri-o://0a7137c744a679565a05d7f69d9ae8043767a6de6370f4ddad9abd8488b3fcb4" gracePeriod=30 Jan 29 11:49:13 crc kubenswrapper[4766]: I0129 11:49:13.867916 4766 generic.go:334] "Generic (PLEG): container finished" podID="2d1764d6-be26-4597-b4bf-141727790edf" containerID="13fb44589cf74d5e3ad8944c0da984fde5889b5fe0e2e188eafbf5713410f1bd" exitCode=143 Jan 29 11:49:13 crc kubenswrapper[4766]: I0129 11:49:13.868004 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2d1764d6-be26-4597-b4bf-141727790edf","Type":"ContainerDied","Data":"13fb44589cf74d5e3ad8944c0da984fde5889b5fe0e2e188eafbf5713410f1bd"} Jan 29 11:49:13 crc kubenswrapper[4766]: I0129 11:49:13.868177 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bf39f22b-4140-45f9-8d54-65ad832c04dc" containerName="nova-api-log" containerID="cri-o://16868a5c3234ec77813550ffac81d5e376c1c5e9ecc0b8eb8c2ccad6b527ff84" gracePeriod=30 Jan 29 11:49:13 crc kubenswrapper[4766]: I0129 11:49:13.868262 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bf39f22b-4140-45f9-8d54-65ad832c04dc" containerName="nova-api-api" containerID="cri-o://3a5d24ec17a725f52aca319216448647a8aeaf15516ce756b4eebb86e4ce4530" gracePeriod=30 Jan 29 11:49:13 crc kubenswrapper[4766]: I0129 11:49:13.872727 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bf39f22b-4140-45f9-8d54-65ad832c04dc" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.194:8774/\": EOF" Jan 29 11:49:13 crc kubenswrapper[4766]: I0129 11:49:13.872813 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bf39f22b-4140-45f9-8d54-65ad832c04dc" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.194:8774/\": EOF" Jan 29 11:49:14 crc kubenswrapper[4766]: E0129 11:49:14.826880 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2034d6227e9a728ab1260c8c65a4e7d76c258e42cbb015afa364f3438e79f915" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 11:49:14 crc kubenswrapper[4766]: E0129 11:49:14.829111 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2034d6227e9a728ab1260c8c65a4e7d76c258e42cbb015afa364f3438e79f915" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 11:49:14 crc kubenswrapper[4766]: E0129 11:49:14.834159 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2034d6227e9a728ab1260c8c65a4e7d76c258e42cbb015afa364f3438e79f915" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 11:49:14 crc kubenswrapper[4766]: E0129 11:49:14.834210 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="0108fc2a-9d13-4196-bb57-b72855958161" containerName="nova-scheduler-scheduler" Jan 29 11:49:14 crc 
Jan 29 11:49:14 crc kubenswrapper[4766]: I0129 11:49:14.878872 4766 generic.go:334] "Generic (PLEG): container finished" podID="bf39f22b-4140-45f9-8d54-65ad832c04dc" containerID="16868a5c3234ec77813550ffac81d5e376c1c5e9ecc0b8eb8c2ccad6b527ff84" exitCode=143 Jan 29 11:49:14 crc kubenswrapper[4766]: I0129 11:49:14.878925 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bf39f22b-4140-45f9-8d54-65ad832c04dc","Type":"ContainerDied","Data":"16868a5c3234ec77813550ffac81d5e376c1c5e9ecc0b8eb8c2ccad6b527ff84"} Jan 29 11:49:16 crc kubenswrapper[4766]: I0129 11:49:16.325826 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="2d1764d6-be26-4597-b4bf-141727790edf" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.187:8775/\": read tcp 10.217.0.2:36450->10.217.0.187:8775: read: connection reset by peer" Jan 29 11:49:16 crc kubenswrapper[4766]: I0129 11:49:16.325851 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="2d1764d6-be26-4597-b4bf-141727790edf" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.187:8775/\": read tcp 10.217.0.2:36456->10.217.0.187:8775: read: connection reset by peer" Jan 29 11:49:16 crc kubenswrapper[4766]: I0129 11:49:16.775385 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:49:16 crc kubenswrapper[4766]: I0129 11:49:16.897819 4766 generic.go:334] "Generic (PLEG): container finished" podID="2d1764d6-be26-4597-b4bf-141727790edf" containerID="0a7137c744a679565a05d7f69d9ae8043767a6de6370f4ddad9abd8488b3fcb4" exitCode=0 Jan 29 11:49:16 crc kubenswrapper[4766]: I0129 11:49:16.897864 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2d1764d6-be26-4597-b4bf-141727790edf","Type":"ContainerDied","Data":"0a7137c744a679565a05d7f69d9ae8043767a6de6370f4ddad9abd8488b3fcb4"} Jan 29 11:49:16 crc kubenswrapper[4766]: I0129 11:49:16.897889 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2d1764d6-be26-4597-b4bf-141727790edf","Type":"ContainerDied","Data":"5418adef69569ce39d71851d2916e5a2da795286f448771809f6ef6537fc28d7"} Jan 29 11:49:16 crc kubenswrapper[4766]: I0129 11:49:16.897922 4766 scope.go:117] "RemoveContainer" containerID="0a7137c744a679565a05d7f69d9ae8043767a6de6370f4ddad9abd8488b3fcb4" Jan 29 11:49:16 crc kubenswrapper[4766]: I0129 11:49:16.898183 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:49:16 crc kubenswrapper[4766]: I0129 11:49:16.918896 4766 scope.go:117] "RemoveContainer" containerID="13fb44589cf74d5e3ad8944c0da984fde5889b5fe0e2e188eafbf5713410f1bd" Jan 29 11:49:16 crc kubenswrapper[4766]: I0129 11:49:16.936002 4766 scope.go:117] "RemoveContainer" containerID="0a7137c744a679565a05d7f69d9ae8043767a6de6370f4ddad9abd8488b3fcb4" Jan 29 11:49:16 crc kubenswrapper[4766]: E0129 11:49:16.936463 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a7137c744a679565a05d7f69d9ae8043767a6de6370f4ddad9abd8488b3fcb4\": container with ID starting with 0a7137c744a679565a05d7f69d9ae8043767a6de6370f4ddad9abd8488b3fcb4 not found: ID does not exist" containerID="0a7137c744a679565a05d7f69d9ae8043767a6de6370f4ddad9abd8488b3fcb4" Jan 29 11:49:16 crc kubenswrapper[4766]: I0129 11:49:16.936496 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a7137c744a679565a05d7f69d9ae8043767a6de6370f4ddad9abd8488b3fcb4"} err="failed to get container status \"0a7137c744a679565a05d7f69d9ae8043767a6de6370f4ddad9abd8488b3fcb4\": rpc error: code = NotFound desc = could not find container \"0a7137c744a679565a05d7f69d9ae8043767a6de6370f4ddad9abd8488b3fcb4\": container with ID starting with 0a7137c744a679565a05d7f69d9ae8043767a6de6370f4ddad9abd8488b3fcb4 not found: ID does not exist" Jan 29 11:49:16 crc kubenswrapper[4766]: I0129 11:49:16.936523 4766 scope.go:117] "RemoveContainer" containerID="13fb44589cf74d5e3ad8944c0da984fde5889b5fe0e2e188eafbf5713410f1bd" Jan 29 11:49:16 crc kubenswrapper[4766]: E0129 11:49:16.936810 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13fb44589cf74d5e3ad8944c0da984fde5889b5fe0e2e188eafbf5713410f1bd\": container with ID starting with 13fb44589cf74d5e3ad8944c0da984fde5889b5fe0e2e188eafbf5713410f1bd not found: ID does not exist" containerID="13fb44589cf74d5e3ad8944c0da984fde5889b5fe0e2e188eafbf5713410f1bd" Jan 29 11:49:16 crc kubenswrapper[4766]: I0129 11:49:16.936853 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13fb44589cf74d5e3ad8944c0da984fde5889b5fe0e2e188eafbf5713410f1bd"} err="failed to get container status \"13fb44589cf74d5e3ad8944c0da984fde5889b5fe0e2e188eafbf5713410f1bd\": rpc error: code = NotFound desc = could not find container \"13fb44589cf74d5e3ad8944c0da984fde5889b5fe0e2e188eafbf5713410f1bd\": container with ID starting with 13fb44589cf74d5e3ad8944c0da984fde5889b5fe0e2e188eafbf5713410f1bd not found: ID does not exist" Jan 29 11:49:16 crc kubenswrapper[4766]: I0129 11:49:16.966864 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1764d6-be26-4597-b4bf-141727790edf-config-data\") pod \"2d1764d6-be26-4597-b4bf-141727790edf\" (UID: \"2d1764d6-be26-4597-b4bf-141727790edf\") " Jan 29 11:49:16 crc kubenswrapper[4766]: I0129 11:49:16.967094 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1764d6-be26-4597-b4bf-141727790edf-combined-ca-bundle\") pod \"2d1764d6-be26-4597-b4bf-141727790edf\" (UID: \"2d1764d6-be26-4597-b4bf-141727790edf\") " Jan 29 11:49:16 crc kubenswrapper[4766]: I0129 11:49:16.967148 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-pk57p\" (UniqueName: \"kubernetes.io/projected/2d1764d6-be26-4597-b4bf-141727790edf-kube-api-access-pk57p\") pod \"2d1764d6-be26-4597-b4bf-141727790edf\" (UID: \"2d1764d6-be26-4597-b4bf-141727790edf\") " Jan 29 11:49:16 crc kubenswrapper[4766]: I0129 11:49:16.967174 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d1764d6-be26-4597-b4bf-141727790edf-nova-metadata-tls-certs\") pod \"2d1764d6-be26-4597-b4bf-141727790edf\" (UID: \"2d1764d6-be26-4597-b4bf-141727790edf\") " Jan 29 11:49:16 crc kubenswrapper[4766]: I0129 11:49:16.967193 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d1764d6-be26-4597-b4bf-141727790edf-logs\") pod \"2d1764d6-be26-4597-b4bf-141727790edf\" (UID: \"2d1764d6-be26-4597-b4bf-141727790edf\") " Jan 29 11:49:16 crc kubenswrapper[4766]: I0129 11:49:16.967869 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d1764d6-be26-4597-b4bf-141727790edf-logs" (OuterVolumeSpecName: "logs") pod "2d1764d6-be26-4597-b4bf-141727790edf" (UID: "2d1764d6-be26-4597-b4bf-141727790edf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:49:16 crc kubenswrapper[4766]: I0129 11:49:16.981644 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d1764d6-be26-4597-b4bf-141727790edf-kube-api-access-pk57p" (OuterVolumeSpecName: "kube-api-access-pk57p") pod "2d1764d6-be26-4597-b4bf-141727790edf" (UID: "2d1764d6-be26-4597-b4bf-141727790edf"). InnerVolumeSpecName "kube-api-access-pk57p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:49:16 crc kubenswrapper[4766]: I0129 11:49:16.993234 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d1764d6-be26-4597-b4bf-141727790edf-config-data" (OuterVolumeSpecName: "config-data") pod "2d1764d6-be26-4597-b4bf-141727790edf" (UID: "2d1764d6-be26-4597-b4bf-141727790edf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:49:16 crc kubenswrapper[4766]: I0129 11:49:16.996668 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d1764d6-be26-4597-b4bf-141727790edf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d1764d6-be26-4597-b4bf-141727790edf" (UID: "2d1764d6-be26-4597-b4bf-141727790edf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.018646 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d1764d6-be26-4597-b4bf-141727790edf-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "2d1764d6-be26-4597-b4bf-141727790edf" (UID: "2d1764d6-be26-4597-b4bf-141727790edf"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.069544 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1764d6-be26-4597-b4bf-141727790edf-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.069579 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1764d6-be26-4597-b4bf-141727790edf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.069591 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pk57p\" (UniqueName: \"kubernetes.io/projected/2d1764d6-be26-4597-b4bf-141727790edf-kube-api-access-pk57p\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.069603 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d1764d6-be26-4597-b4bf-141727790edf-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.069611 4766 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d1764d6-be26-4597-b4bf-141727790edf-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.261806 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.261842 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.267785 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:49:17 crc kubenswrapper[4766]: E0129 11:49:17.268175 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8d0686-f650-46fa-a71e-035021e90814" containerName="dnsmasq-dns" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.268193 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8d0686-f650-46fa-a71e-035021e90814" containerName="dnsmasq-dns" Jan 29 11:49:17 crc kubenswrapper[4766]: E0129 11:49:17.268243 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d1764d6-be26-4597-b4bf-141727790edf" containerName="nova-metadata-log" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.268253 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d1764d6-be26-4597-b4bf-141727790edf" containerName="nova-metadata-log" Jan 29 11:49:17 crc kubenswrapper[4766]: E0129 11:49:17.268271 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d1764d6-be26-4597-b4bf-141727790edf" containerName="nova-metadata-metadata" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.268279 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d1764d6-be26-4597-b4bf-141727790edf" containerName="nova-metadata-metadata" Jan 29 11:49:17 crc kubenswrapper[4766]: E0129 11:49:17.268294 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e7c3a2-1a70-4e43-84db-21832edfdfe1" containerName="nova-manage" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.268301 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e7c3a2-1a70-4e43-84db-21832edfdfe1" containerName="nova-manage" Jan 29 11:49:17 crc kubenswrapper[4766]: E0129 11:49:17.268319 4766 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="1f8d0686-f650-46fa-a71e-035021e90814" containerName="init" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.268326 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8d0686-f650-46fa-a71e-035021e90814" containerName="init" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.268541 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d1764d6-be26-4597-b4bf-141727790edf" containerName="nova-metadata-metadata" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.268554 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8d0686-f650-46fa-a71e-035021e90814" containerName="dnsmasq-dns" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.268567 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d1764d6-be26-4597-b4bf-141727790edf" containerName="nova-metadata-log" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.268581 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8e7c3a2-1a70-4e43-84db-21832edfdfe1" containerName="nova-manage" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.270017 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.276169 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.276328 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.304534 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.374056 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-logs\") pod \"nova-metadata-0\" (UID: \"34a8c513-ef7f-49ce-a0d8-2d9351abca2a\") " pod="openstack/nova-metadata-0" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.374134 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34a8c513-ef7f-49ce-a0d8-2d9351abca2a\") " pod="openstack/nova-metadata-0" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.374223 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zdqb\" (UniqueName: \"kubernetes.io/projected/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-kube-api-access-5zdqb\") pod \"nova-metadata-0\" (UID: \"34a8c513-ef7f-49ce-a0d8-2d9351abca2a\") " pod="openstack/nova-metadata-0" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.374338 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-config-data\") pod \"nova-metadata-0\" (UID: \"34a8c513-ef7f-49ce-a0d8-2d9351abca2a\") " pod="openstack/nova-metadata-0" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.374479 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"34a8c513-ef7f-49ce-a0d8-2d9351abca2a\") " pod="openstack/nova-metadata-0" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.476505 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"34a8c513-ef7f-49ce-a0d8-2d9351abca2a\") " pod="openstack/nova-metadata-0" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.476622 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-logs\") pod \"nova-metadata-0\" (UID: \"34a8c513-ef7f-49ce-a0d8-2d9351abca2a\") " pod="openstack/nova-metadata-0" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.476674 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34a8c513-ef7f-49ce-a0d8-2d9351abca2a\") " pod="openstack/nova-metadata-0" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.476721 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zdqb\" (UniqueName: \"kubernetes.io/projected/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-kube-api-access-5zdqb\") pod \"nova-metadata-0\" (UID: \"34a8c513-ef7f-49ce-a0d8-2d9351abca2a\") " pod="openstack/nova-metadata-0" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.476792 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-config-data\") pod \"nova-metadata-0\" (UID: \"34a8c513-ef7f-49ce-a0d8-2d9351abca2a\") " pod="openstack/nova-metadata-0" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.477821 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-logs\") pod \"nova-metadata-0\" (UID: \"34a8c513-ef7f-49ce-a0d8-2d9351abca2a\") " pod="openstack/nova-metadata-0" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.481195 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"34a8c513-ef7f-49ce-a0d8-2d9351abca2a\") " pod="openstack/nova-metadata-0" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.484931 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34a8c513-ef7f-49ce-a0d8-2d9351abca2a\") " pod="openstack/nova-metadata-0" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.493092 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-config-data\") pod \"nova-metadata-0\" (UID: \"34a8c513-ef7f-49ce-a0d8-2d9351abca2a\") " pod="openstack/nova-metadata-0" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.494187 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zdqb\" (UniqueName: 
\"kubernetes.io/projected/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-kube-api-access-5zdqb\") pod \"nova-metadata-0\" (UID: \"34a8c513-ef7f-49ce-a0d8-2d9351abca2a\") " pod="openstack/nova-metadata-0" Jan 29 11:49:17 crc kubenswrapper[4766]: I0129 11:49:17.593880 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:49:18 crc kubenswrapper[4766]: I0129 11:49:18.029067 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:49:18 crc kubenswrapper[4766]: W0129 11:49:18.039977 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34a8c513_ef7f_49ce_a0d8_2d9351abca2a.slice/crio-d728857aa1e8a906feb7e587b81596585c17a64aa26c46c6abdaf249a2e3dadd WatchSource:0}: Error finding container d728857aa1e8a906feb7e587b81596585c17a64aa26c46c6abdaf249a2e3dadd: Status 404 returned error can't find the container with id d728857aa1e8a906feb7e587b81596585c17a64aa26c46c6abdaf249a2e3dadd Jan 29 11:49:18 crc kubenswrapper[4766]: I0129 11:49:18.834263 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:49:18 crc kubenswrapper[4766]: I0129 11:49:18.917233 4766 generic.go:334] "Generic (PLEG): container finished" podID="0108fc2a-9d13-4196-bb57-b72855958161" containerID="2034d6227e9a728ab1260c8c65a4e7d76c258e42cbb015afa364f3438e79f915" exitCode=0 Jan 29 11:49:18 crc kubenswrapper[4766]: I0129 11:49:18.917605 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0108fc2a-9d13-4196-bb57-b72855958161","Type":"ContainerDied","Data":"2034d6227e9a728ab1260c8c65a4e7d76c258e42cbb015afa364f3438e79f915"} Jan 29 11:49:18 crc kubenswrapper[4766]: I0129 11:49:18.917643 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0108fc2a-9d13-4196-bb57-b72855958161","Type":"ContainerDied","Data":"3df19e89d28136967ee5c7a302610ad5db307c14c83bbbfcbb9962cbb10ab578"} Jan 29 11:49:18 crc kubenswrapper[4766]: I0129 11:49:18.917666 4766 scope.go:117] "RemoveContainer" containerID="2034d6227e9a728ab1260c8c65a4e7d76c258e42cbb015afa364f3438e79f915" Jan 29 11:49:18 crc kubenswrapper[4766]: I0129 11:49:18.917800 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:49:18 crc kubenswrapper[4766]: I0129 11:49:18.920598 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34a8c513-ef7f-49ce-a0d8-2d9351abca2a","Type":"ContainerStarted","Data":"6540ff6aadfe105654848b099a8bef21fce6c3bc83bf18acea31d173e8986a0b"} Jan 29 11:49:18 crc kubenswrapper[4766]: I0129 11:49:18.921402 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34a8c513-ef7f-49ce-a0d8-2d9351abca2a","Type":"ContainerStarted","Data":"ca450350b6e568d52d4063cfee6673c0157620922fe751480913c07db96dc186"} Jan 29 11:49:18 crc kubenswrapper[4766]: I0129 11:49:18.921433 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34a8c513-ef7f-49ce-a0d8-2d9351abca2a","Type":"ContainerStarted","Data":"d728857aa1e8a906feb7e587b81596585c17a64aa26c46c6abdaf249a2e3dadd"} Jan 29 11:49:18 crc kubenswrapper[4766]: I0129 11:49:18.938715 4766 scope.go:117] "RemoveContainer" containerID="2034d6227e9a728ab1260c8c65a4e7d76c258e42cbb015afa364f3438e79f915" Jan 29 11:49:18 crc kubenswrapper[4766]: E0129 11:49:18.942623 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2034d6227e9a728ab1260c8c65a4e7d76c258e42cbb015afa364f3438e79f915\": container with ID starting with 2034d6227e9a728ab1260c8c65a4e7d76c258e42cbb015afa364f3438e79f915 not found: ID does not exist" containerID="2034d6227e9a728ab1260c8c65a4e7d76c258e42cbb015afa364f3438e79f915" Jan 29 11:49:18 crc kubenswrapper[4766]: I0129 11:49:18.942682 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2034d6227e9a728ab1260c8c65a4e7d76c258e42cbb015afa364f3438e79f915"} err="failed to get container status \"2034d6227e9a728ab1260c8c65a4e7d76c258e42cbb015afa364f3438e79f915\": rpc error: code = NotFound desc = could not find container \"2034d6227e9a728ab1260c8c65a4e7d76c258e42cbb015afa364f3438e79f915\": container with ID starting with 2034d6227e9a728ab1260c8c65a4e7d76c258e42cbb015afa364f3438e79f915 not found: ID does not exist" Jan 29 11:49:18 crc kubenswrapper[4766]: I0129 11:49:18.945330 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.9453140370000002 podStartE2EDuration="1.945314037s" podCreationTimestamp="2026-01-29 11:49:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:49:18.942955435 +0000 UTC m=+1696.055348446" watchObservedRunningTime="2026-01-29 11:49:18.945314037 +0000 UTC m=+1696.057707048" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.008773 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lhpc\" (UniqueName: \"kubernetes.io/projected/0108fc2a-9d13-4196-bb57-b72855958161-kube-api-access-8lhpc\") pod \"0108fc2a-9d13-4196-bb57-b72855958161\" (UID: \"0108fc2a-9d13-4196-bb57-b72855958161\") " Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.008822 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0108fc2a-9d13-4196-bb57-b72855958161-config-data\") pod \"0108fc2a-9d13-4196-bb57-b72855958161\" (UID: \"0108fc2a-9d13-4196-bb57-b72855958161\") " Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.009094 4766 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0108fc2a-9d13-4196-bb57-b72855958161-combined-ca-bundle\") pod \"0108fc2a-9d13-4196-bb57-b72855958161\" (UID: \"0108fc2a-9d13-4196-bb57-b72855958161\") " Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.013055 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0108fc2a-9d13-4196-bb57-b72855958161-kube-api-access-8lhpc" (OuterVolumeSpecName: "kube-api-access-8lhpc") pod "0108fc2a-9d13-4196-bb57-b72855958161" (UID: "0108fc2a-9d13-4196-bb57-b72855958161"). InnerVolumeSpecName "kube-api-access-8lhpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.039615 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0108fc2a-9d13-4196-bb57-b72855958161-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0108fc2a-9d13-4196-bb57-b72855958161" (UID: "0108fc2a-9d13-4196-bb57-b72855958161"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.039643 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0108fc2a-9d13-4196-bb57-b72855958161-config-data" (OuterVolumeSpecName: "config-data") pod "0108fc2a-9d13-4196-bb57-b72855958161" (UID: "0108fc2a-9d13-4196-bb57-b72855958161"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.111244 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0108fc2a-9d13-4196-bb57-b72855958161-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.111280 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lhpc\" (UniqueName: \"kubernetes.io/projected/0108fc2a-9d13-4196-bb57-b72855958161-kube-api-access-8lhpc\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.111292 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0108fc2a-9d13-4196-bb57-b72855958161-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.224046 4766 scope.go:117] "RemoveContainer" containerID="0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d" Jan 29 11:49:19 crc kubenswrapper[4766]: E0129 11:49:19.224261 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.238273 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d1764d6-be26-4597-b4bf-141727790edf" path="/var/lib/kubelet/pods/2d1764d6-be26-4597-b4bf-141727790edf/volumes" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.253674 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:49:19 crc 
kubenswrapper[4766]: I0129 11:49:19.266476 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.278028 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:49:19 crc kubenswrapper[4766]: E0129 11:49:19.278522 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0108fc2a-9d13-4196-bb57-b72855958161" containerName="nova-scheduler-scheduler" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.278542 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0108fc2a-9d13-4196-bb57-b72855958161" containerName="nova-scheduler-scheduler" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.278776 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0108fc2a-9d13-4196-bb57-b72855958161" containerName="nova-scheduler-scheduler" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.279402 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.281789 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.285761 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.416158 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d99f4d4-0dab-45de-ac76-7a0fa820c353-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7d99f4d4-0dab-45de-ac76-7a0fa820c353\") " pod="openstack/nova-scheduler-0" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.416305 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d99f4d4-0dab-45de-ac76-7a0fa820c353-config-data\") pod \"nova-scheduler-0\" (UID: \"7d99f4d4-0dab-45de-ac76-7a0fa820c353\") " pod="openstack/nova-scheduler-0" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.416765 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pglrh\" (UniqueName: \"kubernetes.io/projected/7d99f4d4-0dab-45de-ac76-7a0fa820c353-kube-api-access-pglrh\") pod \"nova-scheduler-0\" (UID: \"7d99f4d4-0dab-45de-ac76-7a0fa820c353\") " pod="openstack/nova-scheduler-0" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.518607 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pglrh\" (UniqueName: \"kubernetes.io/projected/7d99f4d4-0dab-45de-ac76-7a0fa820c353-kube-api-access-pglrh\") pod \"nova-scheduler-0\" (UID: \"7d99f4d4-0dab-45de-ac76-7a0fa820c353\") " pod="openstack/nova-scheduler-0" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.518783 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d99f4d4-0dab-45de-ac76-7a0fa820c353-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7d99f4d4-0dab-45de-ac76-7a0fa820c353\") " pod="openstack/nova-scheduler-0" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.518862 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7d99f4d4-0dab-45de-ac76-7a0fa820c353-config-data\") pod \"nova-scheduler-0\" (UID: \"7d99f4d4-0dab-45de-ac76-7a0fa820c353\") " pod="openstack/nova-scheduler-0" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.524758 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d99f4d4-0dab-45de-ac76-7a0fa820c353-config-data\") pod \"nova-scheduler-0\" (UID: \"7d99f4d4-0dab-45de-ac76-7a0fa820c353\") " pod="openstack/nova-scheduler-0" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.525220 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d99f4d4-0dab-45de-ac76-7a0fa820c353-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7d99f4d4-0dab-45de-ac76-7a0fa820c353\") " pod="openstack/nova-scheduler-0" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.538449 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pglrh\" (UniqueName: \"kubernetes.io/projected/7d99f4d4-0dab-45de-ac76-7a0fa820c353-kube-api-access-pglrh\") pod \"nova-scheduler-0\" (UID: \"7d99f4d4-0dab-45de-ac76-7a0fa820c353\") " pod="openstack/nova-scheduler-0" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.660458 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.758443 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.926333 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22t45\" (UniqueName: \"kubernetes.io/projected/bf39f22b-4140-45f9-8d54-65ad832c04dc-kube-api-access-22t45\") pod \"bf39f22b-4140-45f9-8d54-65ad832c04dc\" (UID: \"bf39f22b-4140-45f9-8d54-65ad832c04dc\") " Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.926498 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf39f22b-4140-45f9-8d54-65ad832c04dc-logs\") pod \"bf39f22b-4140-45f9-8d54-65ad832c04dc\" (UID: \"bf39f22b-4140-45f9-8d54-65ad832c04dc\") " Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.926604 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf39f22b-4140-45f9-8d54-65ad832c04dc-public-tls-certs\") pod \"bf39f22b-4140-45f9-8d54-65ad832c04dc\" (UID: \"bf39f22b-4140-45f9-8d54-65ad832c04dc\") " Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.926636 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf39f22b-4140-45f9-8d54-65ad832c04dc-internal-tls-certs\") pod \"bf39f22b-4140-45f9-8d54-65ad832c04dc\" (UID: \"bf39f22b-4140-45f9-8d54-65ad832c04dc\") " Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.926666 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf39f22b-4140-45f9-8d54-65ad832c04dc-combined-ca-bundle\") pod \"bf39f22b-4140-45f9-8d54-65ad832c04dc\" (UID: \"bf39f22b-4140-45f9-8d54-65ad832c04dc\") " Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.926742 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/bf39f22b-4140-45f9-8d54-65ad832c04dc-config-data\") pod \"bf39f22b-4140-45f9-8d54-65ad832c04dc\" (UID: \"bf39f22b-4140-45f9-8d54-65ad832c04dc\") " Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.928062 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf39f22b-4140-45f9-8d54-65ad832c04dc-logs" (OuterVolumeSpecName: "logs") pod "bf39f22b-4140-45f9-8d54-65ad832c04dc" (UID: "bf39f22b-4140-45f9-8d54-65ad832c04dc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.932157 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf39f22b-4140-45f9-8d54-65ad832c04dc-kube-api-access-22t45" (OuterVolumeSpecName: "kube-api-access-22t45") pod "bf39f22b-4140-45f9-8d54-65ad832c04dc" (UID: "bf39f22b-4140-45f9-8d54-65ad832c04dc"). InnerVolumeSpecName "kube-api-access-22t45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.939523 4766 generic.go:334] "Generic (PLEG): container finished" podID="bf39f22b-4140-45f9-8d54-65ad832c04dc" containerID="3a5d24ec17a725f52aca319216448647a8aeaf15516ce756b4eebb86e4ce4530" exitCode=0 Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.940833 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bf39f22b-4140-45f9-8d54-65ad832c04dc","Type":"ContainerDied","Data":"3a5d24ec17a725f52aca319216448647a8aeaf15516ce756b4eebb86e4ce4530"} Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.940866 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bf39f22b-4140-45f9-8d54-65ad832c04dc","Type":"ContainerDied","Data":"75ddd87af93efd31c1d72913b6e8e32e08443dd8e0c56f71d6bfedbd6b4a0404"} Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.940892 4766 scope.go:117] "RemoveContainer" containerID="3a5d24ec17a725f52aca319216448647a8aeaf15516ce756b4eebb86e4ce4530" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.941004 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.969168 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf39f22b-4140-45f9-8d54-65ad832c04dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf39f22b-4140-45f9-8d54-65ad832c04dc" (UID: "bf39f22b-4140-45f9-8d54-65ad832c04dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.970240 4766 scope.go:117] "RemoveContainer" containerID="16868a5c3234ec77813550ffac81d5e376c1c5e9ecc0b8eb8c2ccad6b527ff84" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.974109 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf39f22b-4140-45f9-8d54-65ad832c04dc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bf39f22b-4140-45f9-8d54-65ad832c04dc" (UID: "bf39f22b-4140-45f9-8d54-65ad832c04dc"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.989044 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf39f22b-4140-45f9-8d54-65ad832c04dc-config-data" (OuterVolumeSpecName: "config-data") pod "bf39f22b-4140-45f9-8d54-65ad832c04dc" (UID: "bf39f22b-4140-45f9-8d54-65ad832c04dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.989221 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf39f22b-4140-45f9-8d54-65ad832c04dc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bf39f22b-4140-45f9-8d54-65ad832c04dc" (UID: "bf39f22b-4140-45f9-8d54-65ad832c04dc"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.990034 4766 scope.go:117] "RemoveContainer" containerID="3a5d24ec17a725f52aca319216448647a8aeaf15516ce756b4eebb86e4ce4530" Jan 29 11:49:19 crc kubenswrapper[4766]: E0129 11:49:19.990496 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a5d24ec17a725f52aca319216448647a8aeaf15516ce756b4eebb86e4ce4530\": container with ID starting with 3a5d24ec17a725f52aca319216448647a8aeaf15516ce756b4eebb86e4ce4530 not found: ID does not exist" containerID="3a5d24ec17a725f52aca319216448647a8aeaf15516ce756b4eebb86e4ce4530" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.990538 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a5d24ec17a725f52aca319216448647a8aeaf15516ce756b4eebb86e4ce4530"} err="failed to get container status \"3a5d24ec17a725f52aca319216448647a8aeaf15516ce756b4eebb86e4ce4530\": rpc error: code = NotFound desc = could not find container \"3a5d24ec17a725f52aca319216448647a8aeaf15516ce756b4eebb86e4ce4530\": container with ID starting with 3a5d24ec17a725f52aca319216448647a8aeaf15516ce756b4eebb86e4ce4530 not found: ID does not exist" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.990565 4766 scope.go:117] "RemoveContainer" containerID="16868a5c3234ec77813550ffac81d5e376c1c5e9ecc0b8eb8c2ccad6b527ff84" Jan 29 11:49:19 crc kubenswrapper[4766]: E0129 11:49:19.991064 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16868a5c3234ec77813550ffac81d5e376c1c5e9ecc0b8eb8c2ccad6b527ff84\": container with ID starting with 16868a5c3234ec77813550ffac81d5e376c1c5e9ecc0b8eb8c2ccad6b527ff84 not found: ID does not exist" containerID="16868a5c3234ec77813550ffac81d5e376c1c5e9ecc0b8eb8c2ccad6b527ff84" Jan 29 11:49:19 crc kubenswrapper[4766]: I0129 11:49:19.991155 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16868a5c3234ec77813550ffac81d5e376c1c5e9ecc0b8eb8c2ccad6b527ff84"} err="failed to get container status \"16868a5c3234ec77813550ffac81d5e376c1c5e9ecc0b8eb8c2ccad6b527ff84\": rpc error: code = NotFound desc = could not find container \"16868a5c3234ec77813550ffac81d5e376c1c5e9ecc0b8eb8c2ccad6b527ff84\": container with ID starting with 16868a5c3234ec77813550ffac81d5e376c1c5e9ecc0b8eb8c2ccad6b527ff84 not found: ID does not exist" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.032036 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22t45\" 
(UniqueName: \"kubernetes.io/projected/bf39f22b-4140-45f9-8d54-65ad832c04dc-kube-api-access-22t45\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.032075 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf39f22b-4140-45f9-8d54-65ad832c04dc-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.032087 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf39f22b-4140-45f9-8d54-65ad832c04dc-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.032098 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf39f22b-4140-45f9-8d54-65ad832c04dc-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.032110 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf39f22b-4140-45f9-8d54-65ad832c04dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.032120 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf39f22b-4140-45f9-8d54-65ad832c04dc-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:20 crc kubenswrapper[4766]: W0129 11:49:20.094811 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d99f4d4_0dab_45de_ac76_7a0fa820c353.slice/crio-8ee6be214b93301929773558e83d4a1dad288de13179e2d0f6044a33f74ce1bb WatchSource:0}: Error finding container 8ee6be214b93301929773558e83d4a1dad288de13179e2d0f6044a33f74ce1bb: Status 404 returned error can't find the container with id 8ee6be214b93301929773558e83d4a1dad288de13179e2d0f6044a33f74ce1bb Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.096634 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.279895 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.290118 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.301020 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 11:49:20 crc kubenswrapper[4766]: E0129 11:49:20.301450 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf39f22b-4140-45f9-8d54-65ad832c04dc" containerName="nova-api-api" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.301467 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf39f22b-4140-45f9-8d54-65ad832c04dc" containerName="nova-api-api" Jan 29 11:49:20 crc kubenswrapper[4766]: E0129 11:49:20.301506 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf39f22b-4140-45f9-8d54-65ad832c04dc" containerName="nova-api-log" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.301513 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf39f22b-4140-45f9-8d54-65ad832c04dc" containerName="nova-api-log" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.301665 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf39f22b-4140-45f9-8d54-65ad832c04dc" 
containerName="nova-api-log" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.301684 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf39f22b-4140-45f9-8d54-65ad832c04dc" containerName="nova-api-api" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.302618 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.307236 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.307328 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.309901 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.320104 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.438579 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22118bb6-3dd9-41d5-8215-d8e4679828ba-config-data\") pod \"nova-api-0\" (UID: \"22118bb6-3dd9-41d5-8215-d8e4679828ba\") " pod="openstack/nova-api-0" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.438975 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngjng\" (UniqueName: \"kubernetes.io/projected/22118bb6-3dd9-41d5-8215-d8e4679828ba-kube-api-access-ngjng\") pod \"nova-api-0\" (UID: \"22118bb6-3dd9-41d5-8215-d8e4679828ba\") " pod="openstack/nova-api-0" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.439105 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22118bb6-3dd9-41d5-8215-d8e4679828ba-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"22118bb6-3dd9-41d5-8215-d8e4679828ba\") " pod="openstack/nova-api-0" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.439220 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22118bb6-3dd9-41d5-8215-d8e4679828ba-public-tls-certs\") pod \"nova-api-0\" (UID: \"22118bb6-3dd9-41d5-8215-d8e4679828ba\") " pod="openstack/nova-api-0" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.439354 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22118bb6-3dd9-41d5-8215-d8e4679828ba-logs\") pod \"nova-api-0\" (UID: \"22118bb6-3dd9-41d5-8215-d8e4679828ba\") " pod="openstack/nova-api-0" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.439614 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22118bb6-3dd9-41d5-8215-d8e4679828ba-internal-tls-certs\") pod \"nova-api-0\" (UID: \"22118bb6-3dd9-41d5-8215-d8e4679828ba\") " pod="openstack/nova-api-0" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.541595 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngjng\" (UniqueName: \"kubernetes.io/projected/22118bb6-3dd9-41d5-8215-d8e4679828ba-kube-api-access-ngjng\") pod 
\"nova-api-0\" (UID: \"22118bb6-3dd9-41d5-8215-d8e4679828ba\") " pod="openstack/nova-api-0" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.541652 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22118bb6-3dd9-41d5-8215-d8e4679828ba-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"22118bb6-3dd9-41d5-8215-d8e4679828ba\") " pod="openstack/nova-api-0" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.541674 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22118bb6-3dd9-41d5-8215-d8e4679828ba-public-tls-certs\") pod \"nova-api-0\" (UID: \"22118bb6-3dd9-41d5-8215-d8e4679828ba\") " pod="openstack/nova-api-0" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.541703 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22118bb6-3dd9-41d5-8215-d8e4679828ba-logs\") pod \"nova-api-0\" (UID: \"22118bb6-3dd9-41d5-8215-d8e4679828ba\") " pod="openstack/nova-api-0" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.541817 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22118bb6-3dd9-41d5-8215-d8e4679828ba-internal-tls-certs\") pod \"nova-api-0\" (UID: \"22118bb6-3dd9-41d5-8215-d8e4679828ba\") " pod="openstack/nova-api-0" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.541850 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22118bb6-3dd9-41d5-8215-d8e4679828ba-config-data\") pod \"nova-api-0\" (UID: \"22118bb6-3dd9-41d5-8215-d8e4679828ba\") " pod="openstack/nova-api-0" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.542620 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22118bb6-3dd9-41d5-8215-d8e4679828ba-logs\") pod \"nova-api-0\" (UID: \"22118bb6-3dd9-41d5-8215-d8e4679828ba\") " pod="openstack/nova-api-0" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.547102 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22118bb6-3dd9-41d5-8215-d8e4679828ba-public-tls-certs\") pod \"nova-api-0\" (UID: \"22118bb6-3dd9-41d5-8215-d8e4679828ba\") " pod="openstack/nova-api-0" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.547126 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22118bb6-3dd9-41d5-8215-d8e4679828ba-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"22118bb6-3dd9-41d5-8215-d8e4679828ba\") " pod="openstack/nova-api-0" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.547397 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22118bb6-3dd9-41d5-8215-d8e4679828ba-config-data\") pod \"nova-api-0\" (UID: \"22118bb6-3dd9-41d5-8215-d8e4679828ba\") " pod="openstack/nova-api-0" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.551786 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22118bb6-3dd9-41d5-8215-d8e4679828ba-internal-tls-certs\") pod \"nova-api-0\" (UID: \"22118bb6-3dd9-41d5-8215-d8e4679828ba\") " pod="openstack/nova-api-0" Jan 29 
11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.559656 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngjng\" (UniqueName: \"kubernetes.io/projected/22118bb6-3dd9-41d5-8215-d8e4679828ba-kube-api-access-ngjng\") pod \"nova-api-0\" (UID: \"22118bb6-3dd9-41d5-8215-d8e4679828ba\") " pod="openstack/nova-api-0" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.621367 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.953908 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7d99f4d4-0dab-45de-ac76-7a0fa820c353","Type":"ContainerStarted","Data":"5c1419dc2d71a7c2e968757b6bca4a6964a8725001bdd981f617c2e2f489b856"} Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.954255 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7d99f4d4-0dab-45de-ac76-7a0fa820c353","Type":"ContainerStarted","Data":"8ee6be214b93301929773558e83d4a1dad288de13179e2d0f6044a33f74ce1bb"} Jan 29 11:49:20 crc kubenswrapper[4766]: I0129 11:49:20.974427 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.974390241 podStartE2EDuration="1.974390241s" podCreationTimestamp="2026-01-29 11:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:49:20.971534675 +0000 UTC m=+1698.083927706" watchObservedRunningTime="2026-01-29 11:49:20.974390241 +0000 UTC m=+1698.086783272" Jan 29 11:49:21 crc kubenswrapper[4766]: I0129 11:49:21.056859 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:49:21 crc kubenswrapper[4766]: W0129 11:49:21.057783 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22118bb6_3dd9_41d5_8215_d8e4679828ba.slice/crio-12cd75e180e319b7141c9e38bdac39b69a6314db03a5d8b8b1a00ad19283783b WatchSource:0}: Error finding container 12cd75e180e319b7141c9e38bdac39b69a6314db03a5d8b8b1a00ad19283783b: Status 404 returned error can't find the container with id 12cd75e180e319b7141c9e38bdac39b69a6314db03a5d8b8b1a00ad19283783b Jan 29 11:49:21 crc kubenswrapper[4766]: I0129 11:49:21.237691 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0108fc2a-9d13-4196-bb57-b72855958161" path="/var/lib/kubelet/pods/0108fc2a-9d13-4196-bb57-b72855958161/volumes" Jan 29 11:49:21 crc kubenswrapper[4766]: I0129 11:49:21.238472 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf39f22b-4140-45f9-8d54-65ad832c04dc" path="/var/lib/kubelet/pods/bf39f22b-4140-45f9-8d54-65ad832c04dc/volumes" Jan 29 11:49:21 crc kubenswrapper[4766]: I0129 11:49:21.965024 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"22118bb6-3dd9-41d5-8215-d8e4679828ba","Type":"ContainerStarted","Data":"b58d99a157c04c8d7ed139d5f25b0d83bc497ae3ae246723524e597810fc69f5"} Jan 29 11:49:21 crc kubenswrapper[4766]: I0129 11:49:21.965403 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"22118bb6-3dd9-41d5-8215-d8e4679828ba","Type":"ContainerStarted","Data":"5eccd5688bd3f21bd7b9ab6b4fa9bc25010dd2b9cb4c6d665db537e3ffb66b72"} Jan 29 11:49:21 crc kubenswrapper[4766]: I0129 11:49:21.965450 4766 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"22118bb6-3dd9-41d5-8215-d8e4679828ba","Type":"ContainerStarted","Data":"12cd75e180e319b7141c9e38bdac39b69a6314db03a5d8b8b1a00ad19283783b"} Jan 29 11:49:22 crc kubenswrapper[4766]: I0129 11:49:22.001143 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.001118719 podStartE2EDuration="2.001118719s" podCreationTimestamp="2026-01-29 11:49:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:49:21.99317837 +0000 UTC m=+1699.105571401" watchObservedRunningTime="2026-01-29 11:49:22.001118719 +0000 UTC m=+1699.113511740" Jan 29 11:49:22 crc kubenswrapper[4766]: I0129 11:49:22.593974 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 11:49:22 crc kubenswrapper[4766]: I0129 11:49:22.595287 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 11:49:24 crc kubenswrapper[4766]: I0129 11:49:24.661238 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 11:49:27 crc kubenswrapper[4766]: I0129 11:49:27.595171 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 11:49:27 crc kubenswrapper[4766]: I0129 11:49:27.595955 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 11:49:28 crc kubenswrapper[4766]: I0129 11:49:28.603313 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="34a8c513-ef7f-49ce-a0d8-2d9351abca2a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:49:28 crc kubenswrapper[4766]: I0129 11:49:28.603379 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="34a8c513-ef7f-49ce-a0d8-2d9351abca2a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:49:29 crc kubenswrapper[4766]: I0129 11:49:29.553205 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qnpn4"] Jan 29 11:49:29 crc kubenswrapper[4766]: I0129 11:49:29.555047 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qnpn4" Jan 29 11:49:29 crc kubenswrapper[4766]: I0129 11:49:29.585586 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qnpn4"] Jan 29 11:49:29 crc kubenswrapper[4766]: I0129 11:49:29.661263 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 11:49:29 crc kubenswrapper[4766]: I0129 11:49:29.689696 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 11:49:29 crc kubenswrapper[4766]: I0129 11:49:29.713771 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af5ecbd4-5954-4895-aa6f-49aaa73bef24-catalog-content\") pod \"redhat-operators-qnpn4\" (UID: \"af5ecbd4-5954-4895-aa6f-49aaa73bef24\") " pod="openshift-marketplace/redhat-operators-qnpn4" Jan 29 11:49:29 crc kubenswrapper[4766]: I0129 11:49:29.713930 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af5ecbd4-5954-4895-aa6f-49aaa73bef24-utilities\") pod \"redhat-operators-qnpn4\" (UID: \"af5ecbd4-5954-4895-aa6f-49aaa73bef24\") " pod="openshift-marketplace/redhat-operators-qnpn4" Jan 29 11:49:29 crc kubenswrapper[4766]: I0129 11:49:29.714183 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqvtt\" (UniqueName: \"kubernetes.io/projected/af5ecbd4-5954-4895-aa6f-49aaa73bef24-kube-api-access-bqvtt\") pod \"redhat-operators-qnpn4\" (UID: \"af5ecbd4-5954-4895-aa6f-49aaa73bef24\") " pod="openshift-marketplace/redhat-operators-qnpn4" Jan 29 11:49:29 crc kubenswrapper[4766]: I0129 11:49:29.815766 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqvtt\" (UniqueName: \"kubernetes.io/projected/af5ecbd4-5954-4895-aa6f-49aaa73bef24-kube-api-access-bqvtt\") pod \"redhat-operators-qnpn4\" (UID: \"af5ecbd4-5954-4895-aa6f-49aaa73bef24\") " pod="openshift-marketplace/redhat-operators-qnpn4" Jan 29 11:49:29 crc kubenswrapper[4766]: I0129 11:49:29.815922 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af5ecbd4-5954-4895-aa6f-49aaa73bef24-catalog-content\") pod \"redhat-operators-qnpn4\" (UID: \"af5ecbd4-5954-4895-aa6f-49aaa73bef24\") " pod="openshift-marketplace/redhat-operators-qnpn4" Jan 29 11:49:29 crc kubenswrapper[4766]: I0129 11:49:29.815973 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af5ecbd4-5954-4895-aa6f-49aaa73bef24-utilities\") pod \"redhat-operators-qnpn4\" (UID: \"af5ecbd4-5954-4895-aa6f-49aaa73bef24\") " pod="openshift-marketplace/redhat-operators-qnpn4" Jan 29 11:49:29 crc kubenswrapper[4766]: I0129 11:49:29.816516 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af5ecbd4-5954-4895-aa6f-49aaa73bef24-catalog-content\") pod \"redhat-operators-qnpn4\" (UID: \"af5ecbd4-5954-4895-aa6f-49aaa73bef24\") " pod="openshift-marketplace/redhat-operators-qnpn4" Jan 29 11:49:29 crc kubenswrapper[4766]: I0129 11:49:29.816611 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/af5ecbd4-5954-4895-aa6f-49aaa73bef24-utilities\") pod \"redhat-operators-qnpn4\" (UID: \"af5ecbd4-5954-4895-aa6f-49aaa73bef24\") " pod="openshift-marketplace/redhat-operators-qnpn4" Jan 29 11:49:29 crc kubenswrapper[4766]: I0129 11:49:29.843902 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqvtt\" (UniqueName: \"kubernetes.io/projected/af5ecbd4-5954-4895-aa6f-49aaa73bef24-kube-api-access-bqvtt\") pod \"redhat-operators-qnpn4\" (UID: \"af5ecbd4-5954-4895-aa6f-49aaa73bef24\") " pod="openshift-marketplace/redhat-operators-qnpn4" Jan 29 11:49:29 crc kubenswrapper[4766]: I0129 11:49:29.877677 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qnpn4" Jan 29 11:49:30 crc kubenswrapper[4766]: I0129 11:49:30.087111 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 11:49:30 crc kubenswrapper[4766]: W0129 11:49:30.384403 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf5ecbd4_5954_4895_aa6f_49aaa73bef24.slice/crio-6141a8fea801c07c9d8b74166f2b7ff37f6f09defe9daaad13fc90663081d5ee WatchSource:0}: Error finding container 6141a8fea801c07c9d8b74166f2b7ff37f6f09defe9daaad13fc90663081d5ee: Status 404 returned error can't find the container with id 6141a8fea801c07c9d8b74166f2b7ff37f6f09defe9daaad13fc90663081d5ee Jan 29 11:49:30 crc kubenswrapper[4766]: I0129 11:49:30.389134 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qnpn4"] Jan 29 11:49:30 crc kubenswrapper[4766]: I0129 11:49:30.622759 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 11:49:30 crc kubenswrapper[4766]: I0129 11:49:30.623106 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 11:49:31 crc kubenswrapper[4766]: I0129 11:49:31.058166 4766 generic.go:334] "Generic (PLEG): container finished" podID="af5ecbd4-5954-4895-aa6f-49aaa73bef24" containerID="2ace15c92f16554051b48b0d697802048b5db65dcc488525715b32dc0ffa0a58" exitCode=0 Jan 29 11:49:31 crc kubenswrapper[4766]: I0129 11:49:31.058293 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnpn4" event={"ID":"af5ecbd4-5954-4895-aa6f-49aaa73bef24","Type":"ContainerDied","Data":"2ace15c92f16554051b48b0d697802048b5db65dcc488525715b32dc0ffa0a58"} Jan 29 11:49:31 crc kubenswrapper[4766]: I0129 11:49:31.058363 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnpn4" event={"ID":"af5ecbd4-5954-4895-aa6f-49aaa73bef24","Type":"ContainerStarted","Data":"6141a8fea801c07c9d8b74166f2b7ff37f6f09defe9daaad13fc90663081d5ee"} Jan 29 11:49:31 crc kubenswrapper[4766]: I0129 11:49:31.635614 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="22118bb6-3dd9-41d5-8215-d8e4679828ba" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.199:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:49:31 crc kubenswrapper[4766]: I0129 11:49:31.635620 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="22118bb6-3dd9-41d5-8215-d8e4679828ba" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.199:8774/\": 
net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:49:33 crc kubenswrapper[4766]: I0129 11:49:33.076811 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnpn4" event={"ID":"af5ecbd4-5954-4895-aa6f-49aaa73bef24","Type":"ContainerStarted","Data":"3336690bb5d234bcd958e3de74ea2e12942d47f5e785b1844aa23cbb4ea3e0a1"} Jan 29 11:49:33 crc kubenswrapper[4766]: I0129 11:49:33.224308 4766 scope.go:117] "RemoveContainer" containerID="0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d" Jan 29 11:49:33 crc kubenswrapper[4766]: E0129 11:49:33.224595 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 11:49:35 crc kubenswrapper[4766]: I0129 11:49:35.098303 4766 generic.go:334] "Generic (PLEG): container finished" podID="af5ecbd4-5954-4895-aa6f-49aaa73bef24" containerID="3336690bb5d234bcd958e3de74ea2e12942d47f5e785b1844aa23cbb4ea3e0a1" exitCode=0 Jan 29 11:49:35 crc kubenswrapper[4766]: I0129 11:49:35.098377 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnpn4" event={"ID":"af5ecbd4-5954-4895-aa6f-49aaa73bef24","Type":"ContainerDied","Data":"3336690bb5d234bcd958e3de74ea2e12942d47f5e785b1844aa23cbb4ea3e0a1"} Jan 29 11:49:35 crc kubenswrapper[4766]: I0129 11:49:35.121914 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 11:49:37 crc kubenswrapper[4766]: I0129 11:49:37.130318 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnpn4" event={"ID":"af5ecbd4-5954-4895-aa6f-49aaa73bef24","Type":"ContainerStarted","Data":"84e0c77ca15fdc0eb78eefad809f06f39c7412fddc76045f14c4b802f78cd584"} Jan 29 11:49:37 crc kubenswrapper[4766]: I0129 11:49:37.172293 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qnpn4" podStartSLOduration=3.326855241 podStartE2EDuration="8.172272262s" podCreationTimestamp="2026-01-29 11:49:29 +0000 UTC" firstStartedPulling="2026-01-29 11:49:31.062847824 +0000 UTC m=+1708.175240835" lastFinishedPulling="2026-01-29 11:49:35.908264835 +0000 UTC m=+1713.020657856" observedRunningTime="2026-01-29 11:49:37.171115741 +0000 UTC m=+1714.283508762" watchObservedRunningTime="2026-01-29 11:49:37.172272262 +0000 UTC m=+1714.284665273" Jan 29 11:49:37 crc kubenswrapper[4766]: I0129 11:49:37.600674 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 11:49:37 crc kubenswrapper[4766]: I0129 11:49:37.604476 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 11:49:37 crc kubenswrapper[4766]: I0129 11:49:37.605713 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 11:49:38 crc kubenswrapper[4766]: I0129 11:49:38.145376 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 11:49:39 crc kubenswrapper[4766]: I0129 11:49:39.879484 4766 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qnpn4" Jan 29 11:49:39 crc kubenswrapper[4766]: I0129 11:49:39.880934 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qnpn4" Jan 29 11:49:40 crc kubenswrapper[4766]: I0129 11:49:40.636944 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 11:49:40 crc kubenswrapper[4766]: I0129 11:49:40.637638 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 11:49:40 crc kubenswrapper[4766]: I0129 11:49:40.639281 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 11:49:40 crc kubenswrapper[4766]: I0129 11:49:40.646018 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 11:49:40 crc kubenswrapper[4766]: I0129 11:49:40.932169 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qnpn4" podUID="af5ecbd4-5954-4895-aa6f-49aaa73bef24" containerName="registry-server" probeResult="failure" output=< Jan 29 11:49:40 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s Jan 29 11:49:40 crc kubenswrapper[4766]: > Jan 29 11:49:41 crc kubenswrapper[4766]: I0129 11:49:41.164787 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 11:49:41 crc kubenswrapper[4766]: I0129 11:49:41.173364 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 11:49:47 crc kubenswrapper[4766]: I0129 11:49:47.224761 4766 scope.go:117] "RemoveContainer" containerID="0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d" Jan 29 11:49:47 crc kubenswrapper[4766]: E0129 11:49:47.225559 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 11:49:49 crc kubenswrapper[4766]: I0129 11:49:49.938284 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qnpn4" Jan 29 11:49:50 crc kubenswrapper[4766]: I0129 11:49:50.003574 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qnpn4" Jan 29 11:49:50 crc kubenswrapper[4766]: I0129 11:49:50.171566 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qnpn4"] Jan 29 11:49:51 crc kubenswrapper[4766]: I0129 11:49:51.247803 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qnpn4" podUID="af5ecbd4-5954-4895-aa6f-49aaa73bef24" containerName="registry-server" containerID="cri-o://84e0c77ca15fdc0eb78eefad809f06f39c7412fddc76045f14c4b802f78cd584" gracePeriod=2 Jan 29 11:49:51 crc kubenswrapper[4766]: I0129 11:49:51.751354 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qnpn4" Jan 29 11:49:51 crc kubenswrapper[4766]: I0129 11:49:51.947555 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqvtt\" (UniqueName: \"kubernetes.io/projected/af5ecbd4-5954-4895-aa6f-49aaa73bef24-kube-api-access-bqvtt\") pod \"af5ecbd4-5954-4895-aa6f-49aaa73bef24\" (UID: \"af5ecbd4-5954-4895-aa6f-49aaa73bef24\") " Jan 29 11:49:51 crc kubenswrapper[4766]: I0129 11:49:51.948144 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af5ecbd4-5954-4895-aa6f-49aaa73bef24-utilities\") pod \"af5ecbd4-5954-4895-aa6f-49aaa73bef24\" (UID: \"af5ecbd4-5954-4895-aa6f-49aaa73bef24\") " Jan 29 11:49:51 crc kubenswrapper[4766]: I0129 11:49:51.948182 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af5ecbd4-5954-4895-aa6f-49aaa73bef24-catalog-content\") pod \"af5ecbd4-5954-4895-aa6f-49aaa73bef24\" (UID: \"af5ecbd4-5954-4895-aa6f-49aaa73bef24\") " Jan 29 11:49:51 crc kubenswrapper[4766]: I0129 11:49:51.948896 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af5ecbd4-5954-4895-aa6f-49aaa73bef24-utilities" (OuterVolumeSpecName: "utilities") pod "af5ecbd4-5954-4895-aa6f-49aaa73bef24" (UID: "af5ecbd4-5954-4895-aa6f-49aaa73bef24"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:49:51 crc kubenswrapper[4766]: I0129 11:49:51.956046 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af5ecbd4-5954-4895-aa6f-49aaa73bef24-kube-api-access-bqvtt" (OuterVolumeSpecName: "kube-api-access-bqvtt") pod "af5ecbd4-5954-4895-aa6f-49aaa73bef24" (UID: "af5ecbd4-5954-4895-aa6f-49aaa73bef24"). InnerVolumeSpecName "kube-api-access-bqvtt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:49:52 crc kubenswrapper[4766]: I0129 11:49:52.050731 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqvtt\" (UniqueName: \"kubernetes.io/projected/af5ecbd4-5954-4895-aa6f-49aaa73bef24-kube-api-access-bqvtt\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:52 crc kubenswrapper[4766]: I0129 11:49:52.050764 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af5ecbd4-5954-4895-aa6f-49aaa73bef24-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:52 crc kubenswrapper[4766]: I0129 11:49:52.062043 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af5ecbd4-5954-4895-aa6f-49aaa73bef24-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af5ecbd4-5954-4895-aa6f-49aaa73bef24" (UID: "af5ecbd4-5954-4895-aa6f-49aaa73bef24"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:49:52 crc kubenswrapper[4766]: I0129 11:49:52.152728 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af5ecbd4-5954-4895-aa6f-49aaa73bef24-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:49:52 crc kubenswrapper[4766]: I0129 11:49:52.266573 4766 generic.go:334] "Generic (PLEG): container finished" podID="af5ecbd4-5954-4895-aa6f-49aaa73bef24" containerID="84e0c77ca15fdc0eb78eefad809f06f39c7412fddc76045f14c4b802f78cd584" exitCode=0 Jan 29 11:49:52 crc kubenswrapper[4766]: I0129 11:49:52.266615 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnpn4" event={"ID":"af5ecbd4-5954-4895-aa6f-49aaa73bef24","Type":"ContainerDied","Data":"84e0c77ca15fdc0eb78eefad809f06f39c7412fddc76045f14c4b802f78cd584"} Jan 29 11:49:52 crc kubenswrapper[4766]: I0129 11:49:52.266650 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnpn4" event={"ID":"af5ecbd4-5954-4895-aa6f-49aaa73bef24","Type":"ContainerDied","Data":"6141a8fea801c07c9d8b74166f2b7ff37f6f09defe9daaad13fc90663081d5ee"} Jan 29 11:49:52 crc kubenswrapper[4766]: I0129 11:49:52.266666 4766 scope.go:117] "RemoveContainer" containerID="84e0c77ca15fdc0eb78eefad809f06f39c7412fddc76045f14c4b802f78cd584" Jan 29 11:49:52 crc kubenswrapper[4766]: I0129 11:49:52.266764 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qnpn4" Jan 29 11:49:52 crc kubenswrapper[4766]: I0129 11:49:52.290407 4766 scope.go:117] "RemoveContainer" containerID="3336690bb5d234bcd958e3de74ea2e12942d47f5e785b1844aa23cbb4ea3e0a1" Jan 29 11:49:52 crc kubenswrapper[4766]: I0129 11:49:52.303352 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qnpn4"] Jan 29 11:49:52 crc kubenswrapper[4766]: I0129 11:49:52.313282 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qnpn4"] Jan 29 11:49:52 crc kubenswrapper[4766]: I0129 11:49:52.339529 4766 scope.go:117] "RemoveContainer" containerID="2ace15c92f16554051b48b0d697802048b5db65dcc488525715b32dc0ffa0a58" Jan 29 11:49:52 crc kubenswrapper[4766]: I0129 11:49:52.365379 4766 scope.go:117] "RemoveContainer" containerID="84e0c77ca15fdc0eb78eefad809f06f39c7412fddc76045f14c4b802f78cd584" Jan 29 11:49:52 crc kubenswrapper[4766]: E0129 11:49:52.367342 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84e0c77ca15fdc0eb78eefad809f06f39c7412fddc76045f14c4b802f78cd584\": container with ID starting with 84e0c77ca15fdc0eb78eefad809f06f39c7412fddc76045f14c4b802f78cd584 not found: ID does not exist" containerID="84e0c77ca15fdc0eb78eefad809f06f39c7412fddc76045f14c4b802f78cd584" Jan 29 11:49:52 crc kubenswrapper[4766]: I0129 11:49:52.367405 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84e0c77ca15fdc0eb78eefad809f06f39c7412fddc76045f14c4b802f78cd584"} err="failed to get container status \"84e0c77ca15fdc0eb78eefad809f06f39c7412fddc76045f14c4b802f78cd584\": rpc error: code = NotFound desc = could not find container \"84e0c77ca15fdc0eb78eefad809f06f39c7412fddc76045f14c4b802f78cd584\": container with ID starting with 84e0c77ca15fdc0eb78eefad809f06f39c7412fddc76045f14c4b802f78cd584 not found: ID does not exist" Jan 29 11:49:52 crc 
Jan 29 11:49:52 crc kubenswrapper[4766]: I0129 11:49:52.367461 4766 scope.go:117] "RemoveContainer" containerID="3336690bb5d234bcd958e3de74ea2e12942d47f5e785b1844aa23cbb4ea3e0a1"
Jan 29 11:49:52 crc kubenswrapper[4766]: E0129 11:49:52.367870 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3336690bb5d234bcd958e3de74ea2e12942d47f5e785b1844aa23cbb4ea3e0a1\": container with ID starting with 3336690bb5d234bcd958e3de74ea2e12942d47f5e785b1844aa23cbb4ea3e0a1 not found: ID does not exist" containerID="3336690bb5d234bcd958e3de74ea2e12942d47f5e785b1844aa23cbb4ea3e0a1"
Jan 29 11:49:52 crc kubenswrapper[4766]: I0129 11:49:52.367888 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3336690bb5d234bcd958e3de74ea2e12942d47f5e785b1844aa23cbb4ea3e0a1"} err="failed to get container status \"3336690bb5d234bcd958e3de74ea2e12942d47f5e785b1844aa23cbb4ea3e0a1\": rpc error: code = NotFound desc = could not find container \"3336690bb5d234bcd958e3de74ea2e12942d47f5e785b1844aa23cbb4ea3e0a1\": container with ID starting with 3336690bb5d234bcd958e3de74ea2e12942d47f5e785b1844aa23cbb4ea3e0a1 not found: ID does not exist"
Jan 29 11:49:52 crc kubenswrapper[4766]: I0129 11:49:52.367902 4766 scope.go:117] "RemoveContainer" containerID="2ace15c92f16554051b48b0d697802048b5db65dcc488525715b32dc0ffa0a58"
Jan 29 11:49:52 crc kubenswrapper[4766]: E0129 11:49:52.368175 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ace15c92f16554051b48b0d697802048b5db65dcc488525715b32dc0ffa0a58\": container with ID starting with 2ace15c92f16554051b48b0d697802048b5db65dcc488525715b32dc0ffa0a58 not found: ID does not exist" containerID="2ace15c92f16554051b48b0d697802048b5db65dcc488525715b32dc0ffa0a58"
Jan 29 11:49:52 crc kubenswrapper[4766]: I0129 11:49:52.368205 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ace15c92f16554051b48b0d697802048b5db65dcc488525715b32dc0ffa0a58"} err="failed to get container status \"2ace15c92f16554051b48b0d697802048b5db65dcc488525715b32dc0ffa0a58\": rpc error: code = NotFound desc = could not find container \"2ace15c92f16554051b48b0d697802048b5db65dcc488525715b32dc0ffa0a58\": container with ID starting with 2ace15c92f16554051b48b0d697802048b5db65dcc488525715b32dc0ffa0a58 not found: ID does not exist"
Jan 29 11:49:53 crc kubenswrapper[4766]: I0129 11:49:53.237104 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af5ecbd4-5954-4895-aa6f-49aaa73bef24" path="/var/lib/kubelet/pods/af5ecbd4-5954-4895-aa6f-49aaa73bef24/volumes"
Jan 29 11:49:57 crc kubenswrapper[4766]: I0129 11:49:57.765319 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Jan 29 11:49:57 crc kubenswrapper[4766]: I0129 11:49:57.766138 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="f484f11d-a20d-4d69-9619-d5f8df022bd7" containerName="openstackclient" containerID="cri-o://175dd17b34bc65355780323019254f2f358e93d247a0f42a84345efd3579c3e2" gracePeriod=2
Jan 29 11:49:57 crc kubenswrapper[4766]: I0129 11:49:57.784659 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Jan 29 11:49:57 crc kubenswrapper[4766]: I0129 11:49:57.931611 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-lxhcz"]
Jan 29 11:49:57 crc kubenswrapper[4766]: E0129 11:49:57.932430 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af5ecbd4-5954-4895-aa6f-49aaa73bef24" containerName="registry-server"
Jan 29 11:49:57 crc kubenswrapper[4766]: I0129 11:49:57.932541 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="af5ecbd4-5954-4895-aa6f-49aaa73bef24" containerName="registry-server"
Jan 29 11:49:57 crc kubenswrapper[4766]: E0129 11:49:57.932612 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f484f11d-a20d-4d69-9619-d5f8df022bd7" containerName="openstackclient"
Jan 29 11:49:57 crc kubenswrapper[4766]: I0129 11:49:57.932661 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f484f11d-a20d-4d69-9619-d5f8df022bd7" containerName="openstackclient"
Jan 29 11:49:57 crc kubenswrapper[4766]: E0129 11:49:57.932735 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af5ecbd4-5954-4895-aa6f-49aaa73bef24" containerName="extract-content"
Jan 29 11:49:57 crc kubenswrapper[4766]: I0129 11:49:57.932784 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="af5ecbd4-5954-4895-aa6f-49aaa73bef24" containerName="extract-content"
Jan 29 11:49:57 crc kubenswrapper[4766]: E0129 11:49:57.932846 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af5ecbd4-5954-4895-aa6f-49aaa73bef24" containerName="extract-utilities"
Jan 29 11:49:57 crc kubenswrapper[4766]: I0129 11:49:57.932896 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="af5ecbd4-5954-4895-aa6f-49aaa73bef24" containerName="extract-utilities"
Jan 29 11:49:57 crc kubenswrapper[4766]: I0129 11:49:57.933127 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="af5ecbd4-5954-4895-aa6f-49aaa73bef24" containerName="registry-server"
Jan 29 11:49:57 crc kubenswrapper[4766]: I0129 11:49:57.933202 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f484f11d-a20d-4d69-9619-d5f8df022bd7" containerName="openstackclient"
Jan 29 11:49:57 crc kubenswrapper[4766]: I0129 11:49:57.933911 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lxhcz"
Jan 29 11:49:57 crc kubenswrapper[4766]: I0129 11:49:57.942932 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 29 11:49:57 crc kubenswrapper[4766]: I0129 11:49:57.947728 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lxhcz"]
Jan 29 11:49:57 crc kubenswrapper[4766]: I0129 11:49:57.970430 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-dwz6q"]
Jan 29 11:49:57 crc kubenswrapper[4766]: I0129 11:49:57.994488 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-dwz6q"]
Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.001102 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6739e909-eb6b-4578-8436-fa9f24385e0a-operator-scripts\") pod \"root-account-create-update-lxhcz\" (UID: \"6739e909-eb6b-4578-8436-fa9f24385e0a\") " pod="openstack/root-account-create-update-lxhcz"
Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.001191 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lh6n\" (UniqueName: \"kubernetes.io/projected/6739e909-eb6b-4578-8436-fa9f24385e0a-kube-api-access-2lh6n\") pod \"root-account-create-update-lxhcz\" (UID: \"6739e909-eb6b-4578-8436-fa9f24385e0a\") " pod="openstack/root-account-create-update-lxhcz"
Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.016484 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-2b54-account-create-update-p9795"]
Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.034375 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-2b54-account-create-update-p9795"]
Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.079848 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-2b54-account-create-update-fxkwh"]
Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.081567 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2b54-account-create-update-fxkwh"
Need to start a new one" pod="openstack/cinder-2b54-account-create-update-fxkwh" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.084792 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.100485 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2b54-account-create-update-fxkwh"] Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.104884 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc7mr\" (UniqueName: \"kubernetes.io/projected/1dd3143d-eabf-4163-a7cb-590dc11a2daf-kube-api-access-sc7mr\") pod \"cinder-2b54-account-create-update-fxkwh\" (UID: \"1dd3143d-eabf-4163-a7cb-590dc11a2daf\") " pod="openstack/cinder-2b54-account-create-update-fxkwh" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.104956 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dd3143d-eabf-4163-a7cb-590dc11a2daf-operator-scripts\") pod \"cinder-2b54-account-create-update-fxkwh\" (UID: \"1dd3143d-eabf-4163-a7cb-590dc11a2daf\") " pod="openstack/cinder-2b54-account-create-update-fxkwh" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.105014 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6739e909-eb6b-4578-8436-fa9f24385e0a-operator-scripts\") pod \"root-account-create-update-lxhcz\" (UID: \"6739e909-eb6b-4578-8436-fa9f24385e0a\") " pod="openstack/root-account-create-update-lxhcz" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.105071 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lh6n\" (UniqueName: \"kubernetes.io/projected/6739e909-eb6b-4578-8436-fa9f24385e0a-kube-api-access-2lh6n\") pod \"root-account-create-update-lxhcz\" (UID: \"6739e909-eb6b-4578-8436-fa9f24385e0a\") " pod="openstack/root-account-create-update-lxhcz" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.106005 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6739e909-eb6b-4578-8436-fa9f24385e0a-operator-scripts\") pod \"root-account-create-update-lxhcz\" (UID: \"6739e909-eb6b-4578-8436-fa9f24385e0a\") " pod="openstack/root-account-create-update-lxhcz" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.121604 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-5kz4c"] Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.202898 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-2gh2n"] Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.208601 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dd3143d-eabf-4163-a7cb-590dc11a2daf-operator-scripts\") pod \"cinder-2b54-account-create-update-fxkwh\" (UID: \"1dd3143d-eabf-4163-a7cb-590dc11a2daf\") " pod="openstack/cinder-2b54-account-create-update-fxkwh" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.208975 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc7mr\" (UniqueName: \"kubernetes.io/projected/1dd3143d-eabf-4163-a7cb-590dc11a2daf-kube-api-access-sc7mr\") pod \"cinder-2b54-account-create-update-fxkwh\" 
(UID: \"1dd3143d-eabf-4163-a7cb-590dc11a2daf\") " pod="openstack/cinder-2b54-account-create-update-fxkwh" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.210066 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dd3143d-eabf-4163-a7cb-590dc11a2daf-operator-scripts\") pod \"cinder-2b54-account-create-update-fxkwh\" (UID: \"1dd3143d-eabf-4163-a7cb-590dc11a2daf\") " pod="openstack/cinder-2b54-account-create-update-fxkwh" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.223020 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lh6n\" (UniqueName: \"kubernetes.io/projected/6739e909-eb6b-4578-8436-fa9f24385e0a-kube-api-access-2lh6n\") pod \"root-account-create-update-lxhcz\" (UID: \"6739e909-eb6b-4578-8436-fa9f24385e0a\") " pod="openstack/root-account-create-update-lxhcz" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.229463 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-pmh5k"] Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.229712 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-pmh5k" podUID="defb6fef-3db5-4137-a250-9e20054fe48a" containerName="openstack-network-exporter" containerID="cri-o://a5c7f6df0612c9199b8577076ad9f5fb9025e0093bf6d0900000b45ecd6a38b9" gracePeriod=30 Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.264304 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lxhcz" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.291003 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-4e41-account-create-update-rkj42"] Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.292288 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4e41-account-create-update-rkj42" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.304021 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.310768 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa516723-105a-4ea0-98d7-317538e3d438-operator-scripts\") pod \"glance-4e41-account-create-update-rkj42\" (UID: \"fa516723-105a-4ea0-98d7-317538e3d438\") " pod="openstack/glance-4e41-account-create-update-rkj42" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.310882 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2kdc\" (UniqueName: \"kubernetes.io/projected/fa516723-105a-4ea0-98d7-317538e3d438-kube-api-access-l2kdc\") pod \"glance-4e41-account-create-update-rkj42\" (UID: \"fa516723-105a-4ea0-98d7-317538e3d438\") " pod="openstack/glance-4e41-account-create-update-rkj42" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.328081 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4e41-account-create-update-rkj42"] Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.371546 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.496712 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2kdc\" (UniqueName: \"kubernetes.io/projected/fa516723-105a-4ea0-98d7-317538e3d438-kube-api-access-l2kdc\") pod \"glance-4e41-account-create-update-rkj42\" (UID: \"fa516723-105a-4ea0-98d7-317538e3d438\") " pod="openstack/glance-4e41-account-create-update-rkj42" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.497173 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa516723-105a-4ea0-98d7-317538e3d438-operator-scripts\") pod \"glance-4e41-account-create-update-rkj42\" (UID: \"fa516723-105a-4ea0-98d7-317538e3d438\") " pod="openstack/glance-4e41-account-create-update-rkj42" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.498118 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa516723-105a-4ea0-98d7-317538e3d438-operator-scripts\") pod \"glance-4e41-account-create-update-rkj42\" (UID: \"fa516723-105a-4ea0-98d7-317538e3d438\") " pod="openstack/glance-4e41-account-create-update-rkj42" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.513017 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4e41-account-create-update-p9lr6"] Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.526164 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc7mr\" (UniqueName: \"kubernetes.io/projected/1dd3143d-eabf-4163-a7cb-590dc11a2daf-kube-api-access-sc7mr\") pod \"cinder-2b54-account-create-update-fxkwh\" (UID: \"1dd3143d-eabf-4163-a7cb-590dc11a2daf\") " pod="openstack/cinder-2b54-account-create-update-fxkwh" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.587534 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-4e41-account-create-update-p9lr6"] Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.635684 4766 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/placement-fb14-account-create-update-5hrp9"] Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.637279 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-fb14-account-create-update-5hrp9" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.642605 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2kdc\" (UniqueName: \"kubernetes.io/projected/fa516723-105a-4ea0-98d7-317538e3d438-kube-api-access-l2kdc\") pod \"glance-4e41-account-create-update-rkj42\" (UID: \"fa516723-105a-4ea0-98d7-317538e3d438\") " pod="openstack/glance-4e41-account-create-update-rkj42" Jan 29 11:49:58 crc kubenswrapper[4766]: E0129 11:49:58.657685 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 29 11:49:58 crc kubenswrapper[4766]: E0129 11:49:58.657752 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b77b577e-b980-46fb-945a-a0b57e3bdc17-config-data podName:b77b577e-b980-46fb-945a-a0b57e3bdc17 nodeName:}" failed. No retries permitted until 2026-01-29 11:49:59.157731686 +0000 UTC m=+1736.270124697 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b77b577e-b980-46fb-945a-a0b57e3bdc17-config-data") pod "rabbitmq-server-0" (UID: "b77b577e-b980-46fb-945a-a0b57e3bdc17") : configmap "rabbitmq-config-data" not found Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.679076 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.710439 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2b54-account-create-update-fxkwh" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.728386 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.728707 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="9453e394-ed9c-4d36-b200-e559e620a7f7" containerName="ovn-northd" containerID="cri-o://587fc7f2d8e47e6824d572c88379e7c339ef834f4f2a33713d946a1ea350ea67" gracePeriod=30 Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.728858 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="9453e394-ed9c-4d36-b200-e559e620a7f7" containerName="openstack-network-exporter" containerID="cri-o://7859ad51abd1137169edd5bae5e4945e15a36ed89a66747e6ac27ac9476ded8b" gracePeriod=30 Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.730356 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsd2g\" (UniqueName: \"kubernetes.io/projected/ee945927-3683-4163-ac37-83d894a9569b-kube-api-access-tsd2g\") pod \"placement-fb14-account-create-update-5hrp9\" (UID: \"ee945927-3683-4163-ac37-83d894a9569b\") " pod="openstack/placement-fb14-account-create-update-5hrp9" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.730440 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee945927-3683-4163-ac37-83d894a9569b-operator-scripts\") pod \"placement-fb14-account-create-update-5hrp9\" (UID: \"ee945927-3683-4163-ac37-83d894a9569b\") " pod="openstack/placement-fb14-account-create-update-5hrp9" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.826245 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-fb14-account-create-update-5hrp9"] Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.835935 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsd2g\" (UniqueName: \"kubernetes.io/projected/ee945927-3683-4163-ac37-83d894a9569b-kube-api-access-tsd2g\") pod \"placement-fb14-account-create-update-5hrp9\" (UID: \"ee945927-3683-4163-ac37-83d894a9569b\") " pod="openstack/placement-fb14-account-create-update-5hrp9" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.835995 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee945927-3683-4163-ac37-83d894a9569b-operator-scripts\") pod \"placement-fb14-account-create-update-5hrp9\" (UID: \"ee945927-3683-4163-ac37-83d894a9569b\") " pod="openstack/placement-fb14-account-create-update-5hrp9" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.836721 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee945927-3683-4163-ac37-83d894a9569b-operator-scripts\") pod \"placement-fb14-account-create-update-5hrp9\" (UID: \"ee945927-3683-4163-ac37-83d894a9569b\") " pod="openstack/placement-fb14-account-create-update-5hrp9" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.905930 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-fb14-account-create-update-ndzx8"] Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.945785 4766 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tsd2g\" (UniqueName: \"kubernetes.io/projected/ee945927-3683-4163-ac37-83d894a9569b-kube-api-access-tsd2g\") pod \"placement-fb14-account-create-update-5hrp9\" (UID: \"ee945927-3683-4163-ac37-83d894a9569b\") " pod="openstack/placement-fb14-account-create-update-5hrp9" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.946666 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4e41-account-create-update-rkj42" Jan 29 11:49:58 crc kubenswrapper[4766]: I0129 11:49:58.951531 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-fb14-account-create-update-ndzx8"] Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.000737 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c4c0-account-create-update-lrrxn"] Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.023869 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c4c0-account-create-update-lrrxn"] Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.025542 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-fb14-account-create-update-5hrp9" Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.052637 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-ba9c-account-create-update-fvdrk"] Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.073755 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-ba9c-account-create-update-fvdrk"] Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.127529 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.166502 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-wc899"] Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.223514 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-wc899"] Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.224997 4766 scope.go:117] "RemoveContainer" containerID="0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d" Jan 29 11:49:59 crc kubenswrapper[4766]: E0129 11:49:59.225323 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 11:49:59 crc kubenswrapper[4766]: E0129 11:49:59.247221 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 29 11:49:59 crc kubenswrapper[4766]: E0129 11:49:59.247305 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b77b577e-b980-46fb-945a-a0b57e3bdc17-config-data podName:b77b577e-b980-46fb-945a-a0b57e3bdc17 nodeName:}" failed. No retries permitted until 2026-01-29 11:50:00.247290492 +0000 UTC m=+1737.359683503 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b77b577e-b980-46fb-945a-a0b57e3bdc17-config-data") pod "rabbitmq-server-0" (UID: "b77b577e-b980-46fb-945a-a0b57e3bdc17") : configmap "rabbitmq-config-data" not found Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.282395 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="212d3fdc-eac2-4868-a017-878a6f0d3cea" path="/var/lib/kubelet/pods/212d3fdc-eac2-4868-a017-878a6f0d3cea/volumes" Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.283280 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80170965-93b1-41a5-8a4b-e0e3c87beda4" path="/var/lib/kubelet/pods/80170965-93b1-41a5-8a4b-e0e3c87beda4/volumes" Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.284099 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf714f89-4e2e-43cb-9cbb-427c6270e65e" path="/var/lib/kubelet/pods/bf714f89-4e2e-43cb-9cbb-427c6270e65e/volumes" Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.285015 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2d775c8-398d-45dd-aea7-2c2bc050e040" path="/var/lib/kubelet/pods/e2d775c8-398d-45dd-aea7-2c2bc050e040/volumes" Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.287098 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee506bd6-9b63-4ad3-8499-3802ab144d3e" path="/var/lib/kubelet/pods/ee506bd6-9b63-4ad3-8499-3802ab144d3e/volumes" Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.287613 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5dc4d47-9e72-4287-be4f-176017f5c41a" path="/var/lib/kubelet/pods/f5dc4d47-9e72-4287-be4f-176017f5c41a/volumes" Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.288083 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd606a73-05c8-4c8f-b4f2-281a9f308e43" path="/var/lib/kubelet/pods/fd606a73-05c8-4c8f-b4f2-281a9f308e43/volumes" Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.289086 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-4bqsv"] Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.313627 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-4bqsv"] Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.328502 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.329141 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="c961d826-8e7c-45cf-afa0-a1712a3def4f" containerName="openstack-network-exporter" containerID="cri-o://67a037dad4e638172b7099712b789cf884049f6cc0a4510c0636f1f4a13a2e4a" gracePeriod=300 Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.357087 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-5c55-account-create-update-scf58"] Jan 29 11:49:59 crc kubenswrapper[4766]: E0129 11:49:59.358391 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 29 11:49:59 crc kubenswrapper[4766]: E0129 11:49:59.358448 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ace2f6ec-cf57-4742-82e9-e13fd230bb69-config-data podName:ace2f6ec-cf57-4742-82e9-e13fd230bb69 nodeName:}" failed. 
No retries permitted until 2026-01-29 11:49:59.858433479 +0000 UTC m=+1736.970826490 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/ace2f6ec-cf57-4742-82e9-e13fd230bb69-config-data") pod "rabbitmq-cell1-server-0" (UID: "ace2f6ec-cf57-4742-82e9-e13fd230bb69") : configmap "rabbitmq-cell1-config-data" not found Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.414264 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-mhmgq"] Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.437764 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-5c55-account-create-update-scf58"] Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.454254 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-pmh5k_defb6fef-3db5-4137-a250-9e20054fe48a/openstack-network-exporter/0.log" Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.454313 4766 generic.go:334] "Generic (PLEG): container finished" podID="defb6fef-3db5-4137-a250-9e20054fe48a" containerID="a5c7f6df0612c9199b8577076ad9f5fb9025e0093bf6d0900000b45ecd6a38b9" exitCode=2 Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.454504 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-mhmgq"] Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.454539 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-pmh5k" event={"ID":"defb6fef-3db5-4137-a250-9e20054fe48a","Type":"ContainerDied","Data":"a5c7f6df0612c9199b8577076ad9f5fb9025e0093bf6d0900000b45ecd6a38b9"} Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.471916 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.472707 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="51f2b06e-748d-4bb1-b7e7-f5cd039a532d" containerName="openstack-network-exporter" containerID="cri-o://244347370c5e70ae119750458317e88ba78b6c9c01068f9f4942f415f38e3b6c" gracePeriod=300 Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.483920 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-tdq67"] Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.502966 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-tdq67"] Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.508653 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lxhcz" event={"ID":"6739e909-eb6b-4578-8436-fa9f24385e0a","Type":"ContainerStarted","Data":"125c849041b9198229cb48d4edea827510d588b563f18cc0b1fa1075043e99f2"} Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.520884 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-4mnfs"] Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.535004 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-4mnfs"] Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.559782 4766 generic.go:334] "Generic (PLEG): container finished" podID="9453e394-ed9c-4d36-b200-e559e620a7f7" containerID="7859ad51abd1137169edd5bae5e4945e15a36ed89a66747e6ac27ac9476ded8b" exitCode=2
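
# The "Error syncing pod ... CrashLoopBackOff" entry above means the kubelet is
# refusing to restart machine-config-daemon until its back-off (capped here at
# 5m0s) expires. A minimal way to look at the failing container's last attempt,
# assuming kubectl access to this cluster (pod and container names taken from
# the log, not verified here):
$ kubectl -n openshift-machine-config-operator logs machine-config-daemon-npgg8 \
    -c machine-config-daemon --previous
$ kubectl -n openshift-machine-config-operator get pod machine-config-daemon-npgg8 \
    -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'

Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.559830 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0"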
event={"ID":"9453e394-ed9c-4d36-b200-e559e620a7f7","Type":"ContainerDied","Data":"7859ad51abd1137169edd5bae5e4945e15a36ed89a66747e6ac27ac9476ded8b"} Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.571608 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-skcrx"] Jan 29 11:49:59 crc kubenswrapper[4766]: E0129 11:49:59.582201 4766 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-5kz4c" message=< Jan 29 11:49:59 crc kubenswrapper[4766]: Exiting ovn-controller (1) [ OK ] Jan 29 11:49:59 crc kubenswrapper[4766]: > Jan 29 11:49:59 crc kubenswrapper[4766]: E0129 11:49:59.582229 4766 kuberuntime_container.go:691] "PreStop hook failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " pod="openstack/ovn-controller-5kz4c" podUID="73cf0e15-caab-4cea-94b5-7470d635d767" containerName="ovn-controller" containerID="cri-o://36c3a3ca13981a2584b6cdf28bc3d3bcfe78cd3a54aa84f86d93f915ae3c8201" Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.582260 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-5kz4c" podUID="73cf0e15-caab-4cea-94b5-7470d635d767" containerName="ovn-controller" containerID="cri-o://36c3a3ca13981a2584b6cdf28bc3d3bcfe78cd3a54aa84f86d93f915ae3c8201" gracePeriod=29 Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.602899 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-a5ac-account-create-update-k9rgg"] Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.646489 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-hznsj"] Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.646765 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" podUID="d9ea6d98-59cc-4526-bf59-7328c0321f59" containerName="dnsmasq-dns" containerID="cri-o://4925aa28bf4f33e9f328ae00bd14e1da8d9f6b2c7f29cdfaaafce38d8720b42b" gracePeriod=10 Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.738013 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lxhcz"] Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.773234 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="c961d826-8e7c-45cf-afa0-a1712a3def4f" containerName="ovsdbserver-sb" containerID="cri-o://aa69d365dd52beaeca5420f2ec0d4a643b3863f2b22c8b2c4958a5c03855b17f" gracePeriod=300 Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.863159 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-a5ac-account-create-update-k9rgg"] Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.918110 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-skcrx"] Jan 29 11:49:59 crc kubenswrapper[4766]: E0129 11:49:59.928840 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 29 11:49:59 crc kubenswrapper[4766]: E0129 11:49:59.928935 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ace2f6ec-cf57-4742-82e9-e13fd230bb69-config-data podName:ace2f6ec-cf57-4742-82e9-e13fd230bb69 
nodeName:}" failed. No retries permitted until 2026-01-29 11:50:00.928912572 +0000 UTC m=+1738.041305583 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/ace2f6ec-cf57-4742-82e9-e13fd230bb69-config-data") pod "rabbitmq-cell1-server-0" (UID: "ace2f6ec-cf57-4742-82e9-e13fd230bb69") : configmap "rabbitmq-cell1-config-data" not found Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.964649 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-lmqls"] Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.965394 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="51f2b06e-748d-4bb1-b7e7-f5cd039a532d" containerName="ovsdbserver-nb" containerID="cri-o://8f06668fc700d7443d44bf9dde78d755fef11d0abb78bc9734f13c5f3c751e31" gracePeriod=300 Jan 29 11:49:59 crc kubenswrapper[4766]: I0129 11:49:59.980480 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-lmqls"] Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.013837 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-pmh5k_defb6fef-3db5-4137-a250-9e20054fe48a/openstack-network-exporter/0.log" Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.013925 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-pmh5k" Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.024798 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-22n2k"] Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.067045 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-22n2k"] Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.093537 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.093873 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c0c26286-7e5f-4610-967b-408ad3916918" containerName="cinder-api-log" containerID="cri-o://a3a9fbf48c090c048092e1c49334325b7802b39586ef26e6e58e4213960da8d3" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.094377 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c0c26286-7e5f-4610-967b-408ad3916918" containerName="cinder-api" containerID="cri-o://952afb9816e99acbe37c8a9ddc03d82aee8becf7ea80015a22c126ca32f58ff9" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.126025 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.127051 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="account-server" containerID="cri-o://0a43006268a6331aa5c508b013f959c36b198052b905a54a63dfcc6e786548d6" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.127428 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="swift-recon-cron" containerID="cri-o://8e68677dd185d8414adc8711bc359046fd3ba61c227101b176907d577a947636" gracePeriod=30 Jan 29 11:50:00 crc 
kubenswrapper[4766]: I0129 11:50:00.127472 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="rsync" containerID="cri-o://b5a310208e51de3a1f1085a299d696e0c092c1ac6a305a7368d95a466bfff254" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.127503 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="object-expirer" containerID="cri-o://2395dfbbbded053ffa0416aaf69a1b9af00ea806ccc677235dd81f9d3e9af4d0" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.127534 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="object-updater" containerID="cri-o://f4a7df4ad8946a4ec821983033924fd3dd8e163b9568817e4bde1fb325d0beeb" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.127560 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="object-auditor" containerID="cri-o://c34078361e9f1ca8e71c227ebd7d7091b558e6c3354bb51e22b1e1374342fcd1" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.127590 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="object-replicator" containerID="cri-o://257558dd443e4fbb0f93499c81b54107c340b1424e2baeb386f3a283efa8bdc7" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.127621 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="object-server" containerID="cri-o://d7be2c0fabfadf12060358b5738adc72343b29f57c77135d1af1a5ae1e4e2863" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.127650 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="container-updater" containerID="cri-o://8d94f2b31596b3ca99397133e0199e33b8ac9312697c345fc4b87be8aeecd36f" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.127682 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="container-auditor" containerID="cri-o://30f33e794206b04a93fc0f4e715cfe43660a23a19676c6e5b3df502d2e869f1b" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.127711 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="container-replicator" containerID="cri-o://e0305b2958f6c65d81b49c58ff14fade2e99341839d85bcc73aa51a8cd5a3041" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.127750 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="container-server" containerID="cri-o://7c33d37f74f55ffa51cd765a4b94d2af021150d55ef7e15a523b325c621e7d0a" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.127782 4766 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="account-reaper" containerID="cri-o://0025dd537da59d77d5c32f5643222b1c209187a4cb4389da45a65ec542521294" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.127809 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="account-auditor" containerID="cri-o://aff768bf5b19009768658ec1f0fc18767e8949cd575199e18d90c8f182040d28" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.128154 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="account-replicator" containerID="cri-o://7f8c5aeba92943edcfc2aff61715cdbbc5630ac266d0729c5b84d3f25837100d" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.133043 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defb6fef-3db5-4137-a250-9e20054fe48a-combined-ca-bundle\") pod \"defb6fef-3db5-4137-a250-9e20054fe48a\" (UID: \"defb6fef-3db5-4137-a250-9e20054fe48a\") " Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.133186 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/defb6fef-3db5-4137-a250-9e20054fe48a-config\") pod \"defb6fef-3db5-4137-a250-9e20054fe48a\" (UID: \"defb6fef-3db5-4137-a250-9e20054fe48a\") " Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.133244 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/defb6fef-3db5-4137-a250-9e20054fe48a-ovn-rundir\") pod \"defb6fef-3db5-4137-a250-9e20054fe48a\" (UID: \"defb6fef-3db5-4137-a250-9e20054fe48a\") " Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.133317 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/defb6fef-3db5-4137-a250-9e20054fe48a-ovs-rundir\") pod \"defb6fef-3db5-4137-a250-9e20054fe48a\" (UID: \"defb6fef-3db5-4137-a250-9e20054fe48a\") " Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.133371 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/defb6fef-3db5-4137-a250-9e20054fe48a-metrics-certs-tls-certs\") pod \"defb6fef-3db5-4137-a250-9e20054fe48a\" (UID: \"defb6fef-3db5-4137-a250-9e20054fe48a\") " Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.133462 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsdd9\" (UniqueName: \"kubernetes.io/projected/defb6fef-3db5-4137-a250-9e20054fe48a-kube-api-access-dsdd9\") pod \"defb6fef-3db5-4137-a250-9e20054fe48a\" (UID: \"defb6fef-3db5-4137-a250-9e20054fe48a\") " Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.138854 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/defb6fef-3db5-4137-a250-9e20054fe48a-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "defb6fef-3db5-4137-a250-9e20054fe48a" (UID: "defb6fef-3db5-4137-a250-9e20054fe48a"). InnerVolumeSpecName "ovn-rundir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.139590 4766 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/defb6fef-3db5-4137-a250-9e20054fe48a-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.139944 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/defb6fef-3db5-4137-a250-9e20054fe48a-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "defb6fef-3db5-4137-a250-9e20054fe48a" (UID: "defb6fef-3db5-4137-a250-9e20054fe48a"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.142814 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/defb6fef-3db5-4137-a250-9e20054fe48a-config" (OuterVolumeSpecName: "config") pod "defb6fef-3db5-4137-a250-9e20054fe48a" (UID: "defb6fef-3db5-4137-a250-9e20054fe48a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.153708 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/defb6fef-3db5-4137-a250-9e20054fe48a-kube-api-access-dsdd9" (OuterVolumeSpecName: "kube-api-access-dsdd9") pod "defb6fef-3db5-4137-a250-9e20054fe48a" (UID: "defb6fef-3db5-4137-a250-9e20054fe48a"). InnerVolumeSpecName "kube-api-access-dsdd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.201127 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.202021 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0e9e7d37-60ae-4489-a69a-e4168eb87cf2" containerName="cinder-scheduler" containerID="cri-o://14e4b623cc33e1869a58abf1c35db16e3909d3d2a092250a9f93c7d83fa741ec" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.203771 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0e9e7d37-60ae-4489-a69a-e4168eb87cf2" containerName="probe" containerID="cri-o://0a988c9f46e3a70b4049e9abe888a41821aad0a9143a7ab9d80be40f836fe69e" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.244357 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsdd9\" (UniqueName: \"kubernetes.io/projected/defb6fef-3db5-4137-a250-9e20054fe48a-kube-api-access-dsdd9\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.244398 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/defb6fef-3db5-4137-a250-9e20054fe48a-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.244429 4766 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/defb6fef-3db5-4137-a250-9e20054fe48a-ovs-rundir\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.265030 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defb6fef-3db5-4137-a250-9e20054fe48a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"defb6fef-3db5-4137-a250-9e20054fe48a" (UID: "defb6fef-3db5-4137-a250-9e20054fe48a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.359753 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defb6fef-3db5-4137-a250-9e20054fe48a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:00 crc kubenswrapper[4766]: E0129 11:50:00.360620 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 29 11:50:00 crc kubenswrapper[4766]: E0129 11:50:00.360663 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b77b577e-b980-46fb-945a-a0b57e3bdc17-config-data podName:b77b577e-b980-46fb-945a-a0b57e3bdc17 nodeName:}" failed. No retries permitted until 2026-01-29 11:50:02.360648322 +0000 UTC m=+1739.473041323 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b77b577e-b980-46fb-945a-a0b57e3bdc17-config-data") pod "rabbitmq-server-0" (UID: "b77b577e-b980-46fb-945a-a0b57e3bdc17") : configmap "rabbitmq-config-data" not found Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.445625 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7ff4655576-rzc26"] Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.445933 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7ff4655576-rzc26" podUID="8162079c-abe4-4e9c-bdd5-2fbb43187e61" containerName="placement-log" containerID="cri-o://8b8626b814bdc9ebbe0eb6d6c45744653225b6c9c53cd0a3325216664d30e4d6" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.446509 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7ff4655576-rzc26" podUID="8162079c-abe4-4e9c-bdd5-2fbb43187e61" containerName="placement-api" containerID="cri-o://6b47eab1e9e54a967ffb6a8dbb5d22f27c753e7cad3329b3e2436f5c3898c7c9" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.516177 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8d49f9cb5-5nhnk"] Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.516895 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8d49f9cb5-5nhnk" podUID="dd5d6aa7-be8d-4439-a4d3-70272705cc2f" containerName="neutron-api" containerID="cri-o://a3261857a975d8ba13b382b3c93311ea52ddb25065b1874aaa59d00eb75e61a5" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.517496 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8d49f9cb5-5nhnk" podUID="dd5d6aa7-be8d-4439-a4d3-70272705cc2f" containerName="neutron-httpd" containerID="cri-o://d8dbef2524f2542af763a7cb33a1638c422019cf0cf86edf0a6139eede756496" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.530117 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.531047 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1996793f-f3ca-4559-97d6-867f0d0a2b61" containerName="glance-log" containerID="cri-o://5fa3e2236ec63b27db194527bb716839b21f9cea6f579d3762f4f41dced8ddd1" gracePeriod=30 Jan 29 
11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.531706 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1996793f-f3ca-4559-97d6-867f0d0a2b61" containerName="glance-httpd" containerID="cri-o://7757bdf84a1a20ce16552c3e15762e105f6f1602c859ce9e79be4ff4bbd3a36d" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.574029 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.574376 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dd1ffb49-b314-4d31-94d6-de70e35d917e" containerName="glance-log" containerID="cri-o://5f484b8e00e79b044b603b23bc146e1024f8a58609cafd703ef2e0617e674445" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.574929 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dd1ffb49-b314-4d31-94d6-de70e35d917e" containerName="glance-httpd" containerID="cri-o://a5870626b08c5ff65aad3d62a1002578aa41b4503406b749e77a94df8bdaa959" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.584850 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defb6fef-3db5-4137-a250-9e20054fe48a-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "defb6fef-3db5-4137-a250-9e20054fe48a" (UID: "defb6fef-3db5-4137-a250-9e20054fe48a"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.586116 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.614837 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.615047 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="7d99f4d4-0dab-45de-ac76-7a0fa820c353" containerName="nova-scheduler-scheduler" containerID="cri-o://5c1419dc2d71a7c2e968757b6bca4a6964a8725001bdd981f617c2e2f489b856" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.628511 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-vh7hf"] Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.642505 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-vh7hf"] Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.659580 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2"] Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.659898 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2" podUID="15805cd2-3301-4e59-8c66-adde53408809" containerName="barbican-keystone-listener-log" containerID="cri-o://d3a5a4ab1f26a3b0ec0c993790441804f0c92c85eb73ffb26bede23ff956c81f" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.660430 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2" podUID="15805cd2-3301-4e59-8c66-adde53408809" containerName="barbican-keystone-listener" 
containerID="cri-o://a1a9a79ccf506d864099d855f636208585fac69df49e6476e65b408773389289" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.672554 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/defb6fef-3db5-4137-a250-9e20054fe48a-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.685108 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-67f655d9dc-95fxw"] Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.685304 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-67f655d9dc-95fxw" podUID="e238ce2e-9a21-43c5-94c2-0a31ab078c79" containerName="barbican-worker-log" containerID="cri-o://679c7206ac2f82b82e8b1a3ca3a64bf5f1d0710a5dba85f183e20c4390695423" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.685942 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-67f655d9dc-95fxw" podUID="e238ce2e-9a21-43c5-94c2-0a31ab078c79" containerName="barbican-worker" containerID="cri-o://a5c49449e84d148200b6f0a47a8ec23b2f77e9135152810c5d0bbabc622713e8" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.687914 4766 generic.go:334] "Generic (PLEG): container finished" podID="c0c26286-7e5f-4610-967b-408ad3916918" containerID="a3a9fbf48c090c048092e1c49334325b7802b39586ef26e6e58e4213960da8d3" exitCode=143 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.687977 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c0c26286-7e5f-4610-967b-408ad3916918","Type":"ContainerDied","Data":"a3a9fbf48c090c048092e1c49334325b7802b39586ef26e6e58e4213960da8d3"} Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.720587 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_51f2b06e-748d-4bb1-b7e7-f5cd039a532d/ovsdbserver-nb/0.log" Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.720628 4766 generic.go:334] "Generic (PLEG): container finished" podID="51f2b06e-748d-4bb1-b7e7-f5cd039a532d" containerID="244347370c5e70ae119750458317e88ba78b6c9c01068f9f4942f415f38e3b6c" exitCode=2 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.720644 4766 generic.go:334] "Generic (PLEG): container finished" podID="51f2b06e-748d-4bb1-b7e7-f5cd039a532d" containerID="8f06668fc700d7443d44bf9dde78d755fef11d0abb78bc9734f13c5f3c751e31" exitCode=143 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.720716 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"51f2b06e-748d-4bb1-b7e7-f5cd039a532d","Type":"ContainerDied","Data":"244347370c5e70ae119750458317e88ba78b6c9c01068f9f4942f415f38e3b6c"} Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.720741 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"51f2b06e-748d-4bb1-b7e7-f5cd039a532d","Type":"ContainerDied","Data":"8f06668fc700d7443d44bf9dde78d755fef11d0abb78bc9734f13c5f3c751e31"} Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.728887 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.729116 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="22118bb6-3dd9-41d5-8215-d8e4679828ba" 
containerName="nova-api-log" containerID="cri-o://5eccd5688bd3f21bd7b9ab6b4fa9bc25010dd2b9cb4c6d665db537e3ffb66b72" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.729543 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="22118bb6-3dd9-41d5-8215-d8e4679828ba" containerName="nova-api-api" containerID="cri-o://b58d99a157c04c8d7ed139d5f25b0d83bc497ae3ae246723524e597810fc69f5" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.732283 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-pmh5k_defb6fef-3db5-4137-a250-9e20054fe48a/openstack-network-exporter/0.log" Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.732352 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-pmh5k" event={"ID":"defb6fef-3db5-4137-a250-9e20054fe48a","Type":"ContainerDied","Data":"d910eaa24b5ea235f1db7998854c8bae53c27e04c7367a34c8b3098f58423927"} Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.732383 4766 scope.go:117] "RemoveContainer" containerID="a5c7f6df0612c9199b8577076ad9f5fb9025e0093bf6d0900000b45ecd6a38b9" Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.732509 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-pmh5k" Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.766349 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lxhcz" event={"ID":"6739e909-eb6b-4578-8436-fa9f24385e0a","Type":"ContainerStarted","Data":"d679005363f3d7068f40510c318a9543fe5b63ce1d7e7cc636a9215f217d7925"} Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.767560 4766 scope.go:117] "RemoveContainer" containerID="d679005363f3d7068f40510c318a9543fe5b63ce1d7e7cc636a9215f217d7925" Jan 29 11:50:00 crc kubenswrapper[4766]: E0129 11:50:00.787746 4766 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 29 11:50:00 crc kubenswrapper[4766]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 29 11:50:00 crc kubenswrapper[4766]: + source /usr/local/bin/container-scripts/functions Jan 29 11:50:00 crc kubenswrapper[4766]: ++ OVNBridge=br-int Jan 29 11:50:00 crc kubenswrapper[4766]: ++ OVNRemote=tcp:localhost:6642 Jan 29 11:50:00 crc kubenswrapper[4766]: ++ OVNEncapType=geneve Jan 29 11:50:00 crc kubenswrapper[4766]: ++ OVNAvailabilityZones= Jan 29 11:50:00 crc kubenswrapper[4766]: ++ EnableChassisAsGateway=true Jan 29 11:50:00 crc kubenswrapper[4766]: ++ PhysicalNetworks= Jan 29 11:50:00 crc kubenswrapper[4766]: ++ OVNHostName= Jan 29 11:50:00 crc kubenswrapper[4766]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 29 11:50:00 crc kubenswrapper[4766]: ++ ovs_dir=/var/lib/openvswitch Jan 29 11:50:00 crc kubenswrapper[4766]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 29 11:50:00 crc kubenswrapper[4766]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 29 11:50:00 crc kubenswrapper[4766]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 11:50:00 crc kubenswrapper[4766]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 11:50:00 crc kubenswrapper[4766]: + sleep 0.5 Jan 29 11:50:00 crc kubenswrapper[4766]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 11:50:00 crc kubenswrapper[4766]: + sleep 0.5 Jan 29 11:50:00 crc kubenswrapper[4766]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 11:50:00 crc kubenswrapper[4766]: + sleep 0.5 Jan 29 11:50:00 crc kubenswrapper[4766]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 11:50:00 crc kubenswrapper[4766]: + sleep 0.5 Jan 29 11:50:00 crc kubenswrapper[4766]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 11:50:00 crc kubenswrapper[4766]: + cleanup_ovsdb_server_semaphore Jan 29 11:50:00 crc kubenswrapper[4766]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 11:50:00 crc kubenswrapper[4766]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 29 11:50:00 crc kubenswrapper[4766]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-2gh2n" message=< Jan 29 11:50:00 crc kubenswrapper[4766]: Exiting ovsdb-server (5) [ OK ] Jan 29 11:50:00 crc kubenswrapper[4766]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 29 11:50:00 crc kubenswrapper[4766]: + source /usr/local/bin/container-scripts/functions Jan 29 11:50:00 crc kubenswrapper[4766]: ++ OVNBridge=br-int Jan 29 11:50:00 crc kubenswrapper[4766]: ++ OVNRemote=tcp:localhost:6642 Jan 29 11:50:00 crc kubenswrapper[4766]: ++ OVNEncapType=geneve Jan 29 11:50:00 crc kubenswrapper[4766]: ++ OVNAvailabilityZones= Jan 29 11:50:00 crc kubenswrapper[4766]: ++ EnableChassisAsGateway=true Jan 29 11:50:00 crc kubenswrapper[4766]: ++ PhysicalNetworks= Jan 29 11:50:00 crc kubenswrapper[4766]: ++ OVNHostName= Jan 29 11:50:00 crc kubenswrapper[4766]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 29 11:50:00 crc kubenswrapper[4766]: ++ ovs_dir=/var/lib/openvswitch Jan 29 11:50:00 crc kubenswrapper[4766]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 29 11:50:00 crc kubenswrapper[4766]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 29 11:50:00 crc kubenswrapper[4766]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 11:50:00 crc kubenswrapper[4766]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 11:50:00 crc kubenswrapper[4766]: + sleep 0.5 Jan 29 11:50:00 crc kubenswrapper[4766]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 11:50:00 crc kubenswrapper[4766]: + sleep 0.5 Jan 29 11:50:00 crc kubenswrapper[4766]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 11:50:00 crc kubenswrapper[4766]: + sleep 0.5 Jan 29 11:50:00 crc kubenswrapper[4766]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 11:50:00 crc kubenswrapper[4766]: + sleep 0.5 Jan 29 11:50:00 crc kubenswrapper[4766]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 11:50:00 crc kubenswrapper[4766]: + cleanup_ovsdb_server_semaphore Jan 29 11:50:00 crc kubenswrapper[4766]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 11:50:00 crc kubenswrapper[4766]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 29 11:50:00 crc kubenswrapper[4766]: > Jan 29 11:50:00 crc kubenswrapper[4766]: E0129 11:50:00.787794 4766 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 29 11:50:00 crc kubenswrapper[4766]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 29 11:50:00 crc kubenswrapper[4766]: + source /usr/local/bin/container-scripts/functions Jan 29 11:50:00 crc kubenswrapper[4766]: ++ OVNBridge=br-int Jan 29 11:50:00 crc kubenswrapper[4766]: ++ OVNRemote=tcp:localhost:6642 Jan 29 11:50:00 crc kubenswrapper[4766]: ++ OVNEncapType=geneve Jan 29 11:50:00 crc kubenswrapper[4766]: ++ OVNAvailabilityZones= Jan 29 11:50:00 crc kubenswrapper[4766]: ++ EnableChassisAsGateway=true Jan 29 11:50:00 crc kubenswrapper[4766]: ++ PhysicalNetworks= Jan 29 11:50:00 crc kubenswrapper[4766]: ++ OVNHostName= Jan 29 11:50:00 crc kubenswrapper[4766]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 29 11:50:00 crc kubenswrapper[4766]: ++ ovs_dir=/var/lib/openvswitch Jan 29 11:50:00 crc kubenswrapper[4766]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 29 11:50:00 crc kubenswrapper[4766]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 29 11:50:00 crc kubenswrapper[4766]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 11:50:00 crc kubenswrapper[4766]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 11:50:00 crc kubenswrapper[4766]: + sleep 0.5 Jan 29 11:50:00 crc kubenswrapper[4766]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 11:50:00 crc kubenswrapper[4766]: + sleep 0.5 Jan 29 11:50:00 crc kubenswrapper[4766]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 11:50:00 crc kubenswrapper[4766]: + sleep 0.5 Jan 29 11:50:00 crc kubenswrapper[4766]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 11:50:00 crc kubenswrapper[4766]: + sleep 0.5 Jan 29 11:50:00 crc kubenswrapper[4766]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 11:50:00 crc kubenswrapper[4766]: + cleanup_ovsdb_server_semaphore Jan 29 11:50:00 crc kubenswrapper[4766]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 11:50:00 crc kubenswrapper[4766]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 29 11:50:00 crc kubenswrapper[4766]: > pod="openstack/ovn-controller-ovs-2gh2n" podUID="be830961-a6c3-4340-a134-ea20de96b31b" containerName="ovsdb-server" containerID="cri-o://ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2" Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.787832 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-2gh2n" podUID="be830961-a6c3-4340-a134-ea20de96b31b" containerName="ovsdb-server" containerID="cri-o://ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2" gracePeriod=28 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.800581 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="b77b577e-b980-46fb-945a-a0b57e3bdc17" containerName="rabbitmq" containerID="cri-o://81f89abef5c9ff0ed76588cc8797d021673aa15a99156bcbfe83b47af9618c73" gracePeriod=604800 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.800825 4766 generic.go:334] "Generic (PLEG): container finished" podID="f484f11d-a20d-4d69-9619-d5f8df022bd7" containerID="175dd17b34bc65355780323019254f2f358e93d247a0f42a84345efd3579c3e2" exitCode=137 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.841650 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-2b54-account-create-update-fxkwh"] Jan 29 11:50:00 crc kubenswrapper[4766]: W0129 11:50:00.844705 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa516723_105a_4ea0_98d7_317538e3d438.slice/crio-9c7fa56cadd5d905af55463952ceb6834351acb719a90655e7a4fb7e1bbc2476 WatchSource:0}: Error finding container 9c7fa56cadd5d905af55463952ceb6834351acb719a90655e7a4fb7e1bbc2476: Status 404 returned error can't find the container with id 9c7fa56cadd5d905af55463952ceb6834351acb719a90655e7a4fb7e1bbc2476 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.860062 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-864fcd46f6-bn7r2"] Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.861438 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-864fcd46f6-bn7r2" podUID="a084e5b1-d167-4678-8ab9-af72fb1d07fd" containerName="barbican-api-log" containerID="cri-o://69129b08fe5bfc552d777715d2a1eac20f74a31b1c06ebb3940050c592d7eaeb" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.861571 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-864fcd46f6-bn7r2" podUID="a084e5b1-d167-4678-8ab9-af72fb1d07fd" containerName="barbican-api" containerID="cri-o://9982d06a3f9e319a6ac98d0397be8271cb4490d37b4f3f2be7d30bd0f946c97e" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.870561 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-q9q6t"]
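
# Two exit codes recur during this teardown: 143 = 128+15 (SIGTERM, a clean
# shutdown) and 137 = 128+9 (SIGKILL, which is why the PreStop hooks above
# report "exited with 137" once the runtime force-stops them). Bash can decode
# both directly:
$ kill -l 143
TERM
$ kill -l 137
KILL
# The gracePeriod=604800 on the rabbitmq container above is not a typo; it is
# exactly seven days, consistent with the long drain window the RabbitMQ
# cluster operator is known to configure on its StatefulSet:
$ echo $((7*24*60*60))
604800

Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.873999 4766 generic.go:334] "Generic (PLEG): container finished" podID="c299dfaa-12db-4482-ab89-55ba85b8e2a7"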
containerID="2395dfbbbded053ffa0416aaf69a1b9af00ea806ccc677235dd81f9d3e9af4d0" exitCode=0 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.874033 4766 generic.go:334] "Generic (PLEG): container finished" podID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerID="f4a7df4ad8946a4ec821983033924fd3dd8e163b9568817e4bde1fb325d0beeb" exitCode=0 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.874045 4766 generic.go:334] "Generic (PLEG): container finished" podID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerID="c34078361e9f1ca8e71c227ebd7d7091b558e6c3354bb51e22b1e1374342fcd1" exitCode=0 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.874054 4766 generic.go:334] "Generic (PLEG): container finished" podID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerID="257558dd443e4fbb0f93499c81b54107c340b1424e2baeb386f3a283efa8bdc7" exitCode=0 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.874064 4766 generic.go:334] "Generic (PLEG): container finished" podID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerID="8d94f2b31596b3ca99397133e0199e33b8ac9312697c345fc4b87be8aeecd36f" exitCode=0 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.874073 4766 generic.go:334] "Generic (PLEG): container finished" podID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerID="30f33e794206b04a93fc0f4e715cfe43660a23a19676c6e5b3df502d2e869f1b" exitCode=0 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.874082 4766 generic.go:334] "Generic (PLEG): container finished" podID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerID="e0305b2958f6c65d81b49c58ff14fade2e99341839d85bcc73aa51a8cd5a3041" exitCode=0 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.874089 4766 generic.go:334] "Generic (PLEG): container finished" podID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerID="0025dd537da59d77d5c32f5643222b1c209187a4cb4389da45a65ec542521294" exitCode=0 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.874098 4766 generic.go:334] "Generic (PLEG): container finished" podID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerID="aff768bf5b19009768658ec1f0fc18767e8949cd575199e18d90c8f182040d28" exitCode=0 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.874107 4766 generic.go:334] "Generic (PLEG): container finished" podID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerID="7f8c5aeba92943edcfc2aff61715cdbbc5630ac266d0729c5b84d3f25837100d" exitCode=0 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.874160 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerDied","Data":"2395dfbbbded053ffa0416aaf69a1b9af00ea806ccc677235dd81f9d3e9af4d0"} Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.874194 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerDied","Data":"f4a7df4ad8946a4ec821983033924fd3dd8e163b9568817e4bde1fb325d0beeb"} Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.874207 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerDied","Data":"c34078361e9f1ca8e71c227ebd7d7091b558e6c3354bb51e22b1e1374342fcd1"} Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.874219 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerDied","Data":"257558dd443e4fbb0f93499c81b54107c340b1424e2baeb386f3a283efa8bdc7"} Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.874230 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerDied","Data":"8d94f2b31596b3ca99397133e0199e33b8ac9312697c345fc4b87be8aeecd36f"} Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.874243 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerDied","Data":"30f33e794206b04a93fc0f4e715cfe43660a23a19676c6e5b3df502d2e869f1b"} Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.874256 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerDied","Data":"e0305b2958f6c65d81b49c58ff14fade2e99341839d85bcc73aa51a8cd5a3041"} Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.874268 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerDied","Data":"0025dd537da59d77d5c32f5643222b1c209187a4cb4389da45a65ec542521294"} Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.874279 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerDied","Data":"aff768bf5b19009768658ec1f0fc18767e8949cd575199e18d90c8f182040d28"} Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.874292 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerDied","Data":"7f8c5aeba92943edcfc2aff61715cdbbc5630ac266d0729c5b84d3f25837100d"} Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.883858 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c961d826-8e7c-45cf-afa0-a1712a3def4f/ovsdbserver-sb/0.log" Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.883921 4766 generic.go:334] "Generic (PLEG): container finished" podID="c961d826-8e7c-45cf-afa0-a1712a3def4f" containerID="67a037dad4e638172b7099712b789cf884049f6cc0a4510c0636f1f4a13a2e4a" exitCode=2 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.883946 4766 generic.go:334] "Generic (PLEG): container finished" podID="c961d826-8e7c-45cf-afa0-a1712a3def4f" containerID="aa69d365dd52beaeca5420f2ec0d4a643b3863f2b22c8b2c4958a5c03855b17f" exitCode=143 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.884040 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c961d826-8e7c-45cf-afa0-a1712a3def4f","Type":"ContainerDied","Data":"67a037dad4e638172b7099712b789cf884049f6cc0a4510c0636f1f4a13a2e4a"} Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.884072 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c961d826-8e7c-45cf-afa0-a1712a3def4f","Type":"ContainerDied","Data":"aa69d365dd52beaeca5420f2ec0d4a643b3863f2b22c8b2c4958a5c03855b17f"} Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.885771 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-q9q6t"] Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.900157 4766 generic.go:334] "Generic (PLEG): container finished" 
podID="d9ea6d98-59cc-4526-bf59-7328c0321f59" containerID="4925aa28bf4f33e9f328ae00bd14e1da8d9f6b2c7f29cdfaaafce38d8720b42b" exitCode=0 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.900243 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-c583-account-create-update-czpwn"] Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.900267 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" event={"ID":"d9ea6d98-59cc-4526-bf59-7328c0321f59","Type":"ContainerDied","Data":"4925aa28bf4f33e9f328ae00bd14e1da8d9f6b2c7f29cdfaaafce38d8720b42b"} Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.901049 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-2gh2n" podUID="be830961-a6c3-4340-a134-ea20de96b31b" containerName="ovs-vswitchd" containerID="cri-o://6d0c73be724cc09499410e85d8a2850f80580b59a49608c7346ae0c91c515cca" gracePeriod=28 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.906302 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-c583-account-create-update-czpwn"] Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.910349 4766 generic.go:334] "Generic (PLEG): container finished" podID="73cf0e15-caab-4cea-94b5-7470d635d767" containerID="36c3a3ca13981a2584b6cdf28bc3d3bcfe78cd3a54aa84f86d93f915ae3c8201" exitCode=0 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.910403 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5kz4c" event={"ID":"73cf0e15-caab-4cea-94b5-7470d635d767","Type":"ContainerDied","Data":"36c3a3ca13981a2584b6cdf28bc3d3bcfe78cd3a54aa84f86d93f915ae3c8201"} Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.916087 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.916654 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="34a8c513-ef7f-49ce-a0d8-2d9351abca2a" containerName="nova-metadata-log" containerID="cri-o://ca450350b6e568d52d4063cfee6673c0157620922fe751480913c07db96dc186" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.917175 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="34a8c513-ef7f-49ce-a0d8-2d9351abca2a" containerName="nova-metadata-metadata" containerID="cri-o://6540ff6aadfe105654848b099a8bef21fce6c3bc83bf18acea31d173e8986a0b" gracePeriod=30 Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.934304 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-c4t2j"] Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.947044 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-c4t2j"] Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.958595 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.984702 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-8zbms"] Jan 29 11:50:00 crc kubenswrapper[4766]: E0129 11:50:00.986482 4766 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 11:50:00 crc kubenswrapper[4766]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 29 11:50:00 
crc kubenswrapper[4766]: Jan 29 11:50:00 crc kubenswrapper[4766]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 11:50:00 crc kubenswrapper[4766]: Jan 29 11:50:00 crc kubenswrapper[4766]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 29 11:50:00 crc kubenswrapper[4766]: Jan 29 11:50:00 crc kubenswrapper[4766]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 29 11:50:00 crc kubenswrapper[4766]: Jan 29 11:50:00 crc kubenswrapper[4766]: if [ -n "glance" ]; then Jan 29 11:50:00 crc kubenswrapper[4766]: GRANT_DATABASE="glance" Jan 29 11:50:00 crc kubenswrapper[4766]: else Jan 29 11:50:00 crc kubenswrapper[4766]: GRANT_DATABASE="*" Jan 29 11:50:00 crc kubenswrapper[4766]: fi Jan 29 11:50:00 crc kubenswrapper[4766]: Jan 29 11:50:00 crc kubenswrapper[4766]: # going for maximum compatibility here: Jan 29 11:50:00 crc kubenswrapper[4766]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 11:50:00 crc kubenswrapper[4766]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 11:50:00 crc kubenswrapper[4766]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 29 11:50:00 crc kubenswrapper[4766]: # support updates Jan 29 11:50:00 crc kubenswrapper[4766]: Jan 29 11:50:00 crc kubenswrapper[4766]: $MYSQL_CMD < logger="UnhandledError" Jan 29 11:50:00 crc kubenswrapper[4766]: E0129 11:50:00.987887 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-4e41-account-create-update-rkj42" podUID="fa516723-105a-4ea0-98d7-317538e3d438" Jan 29 11:50:00 crc kubenswrapper[4766]: E0129 11:50:00.988789 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 29 11:50:00 crc kubenswrapper[4766]: E0129 11:50:00.988842 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ace2f6ec-cf57-4742-82e9-e13fd230bb69-config-data podName:ace2f6ec-cf57-4742-82e9-e13fd230bb69 nodeName:}" failed. No retries permitted until 2026-01-29 11:50:02.988821724 +0000 UTC m=+1740.101214735 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/ace2f6ec-cf57-4742-82e9-e13fd230bb69-config-data") pod "rabbitmq-cell1-server-0" (UID: "ace2f6ec-cf57-4742-82e9-e13fd230bb69") : configmap "rabbitmq-cell1-config-data" not found Jan 29 11:50:00 crc kubenswrapper[4766]: I0129 11:50:00.998260 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-8zbms"] Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.024067 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.024442 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="7245aebe-fe32-42fc-a489-c38b15bb4308" containerName="nova-cell0-conductor-conductor" containerID="cri-o://975d9dec64a2fca25f52a750db0c70feb57df9e6479ecb4133299bd8f6a0e06c" gracePeriod=30 Jan 29 11:50:01 crc kubenswrapper[4766]: E0129 11:50:01.038067 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8f06668fc700d7443d44bf9dde78d755fef11d0abb78bc9734f13c5f3c751e31 is running failed: container process not found" containerID="8f06668fc700d7443d44bf9dde78d755fef11d0abb78bc9734f13c5f3c751e31" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 29 11:50:01 crc kubenswrapper[4766]: E0129 11:50:01.038525 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8f06668fc700d7443d44bf9dde78d755fef11d0abb78bc9734f13c5f3c751e31 is running failed: container process not found" containerID="8f06668fc700d7443d44bf9dde78d755fef11d0abb78bc9734f13c5f3c751e31" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 29 11:50:01 crc kubenswrapper[4766]: E0129 11:50:01.038794 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8f06668fc700d7443d44bf9dde78d755fef11d0abb78bc9734f13c5f3c751e31 is running failed: container process not found" containerID="8f06668fc700d7443d44bf9dde78d755fef11d0abb78bc9734f13c5f3c751e31" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 29 11:50:01 crc kubenswrapper[4766]: E0129 11:50:01.038817 4766 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8f06668fc700d7443d44bf9dde78d755fef11d0abb78bc9734f13c5f3c751e31 is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-nb-0" podUID="51f2b06e-748d-4bb1-b7e7-f5cd039a532d" containerName="ovsdbserver-nb" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.057924 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cv24v"] Jan 29 11:50:01 crc kubenswrapper[4766]: E0129 11:50:01.090623 4766 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 11:50:01 crc kubenswrapper[4766]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 29 11:50:01 crc kubenswrapper[4766]: Jan 29 11:50:01 crc kubenswrapper[4766]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 11:50:01 crc kubenswrapper[4766]: Jan 29 11:50:01 crc kubenswrapper[4766]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword 
variable."} Jan 29 11:50:01 crc kubenswrapper[4766]: Jan 29 11:50:01 crc kubenswrapper[4766]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 29 11:50:01 crc kubenswrapper[4766]: Jan 29 11:50:01 crc kubenswrapper[4766]: if [ -n "cinder" ]; then Jan 29 11:50:01 crc kubenswrapper[4766]: GRANT_DATABASE="cinder" Jan 29 11:50:01 crc kubenswrapper[4766]: else Jan 29 11:50:01 crc kubenswrapper[4766]: GRANT_DATABASE="*" Jan 29 11:50:01 crc kubenswrapper[4766]: fi Jan 29 11:50:01 crc kubenswrapper[4766]: Jan 29 11:50:01 crc kubenswrapper[4766]: # going for maximum compatibility here: Jan 29 11:50:01 crc kubenswrapper[4766]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 11:50:01 crc kubenswrapper[4766]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 11:50:01 crc kubenswrapper[4766]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 29 11:50:01 crc kubenswrapper[4766]: # support updates Jan 29 11:50:01 crc kubenswrapper[4766]: Jan 29 11:50:01 crc kubenswrapper[4766]: $MYSQL_CMD < logger="UnhandledError" Jan 29 11:50:01 crc kubenswrapper[4766]: E0129 11:50:01.092743 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-2b54-account-create-update-fxkwh" podUID="1dd3143d-eabf-4163-a7cb-590dc11a2daf" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.092892 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cv24v"] Jan 29 11:50:01 crc kubenswrapper[4766]: E0129 11:50:01.105613 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="975d9dec64a2fca25f52a750db0c70feb57df9e6479ecb4133299bd8f6a0e06c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 11:50:01 crc kubenswrapper[4766]: E0129 11:50:01.109769 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="975d9dec64a2fca25f52a750db0c70feb57df9e6479ecb4133299bd8f6a0e06c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 11:50:01 crc kubenswrapper[4766]: E0129 11:50:01.117360 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="975d9dec64a2fca25f52a750db0c70feb57df9e6479ecb4133299bd8f6a0e06c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 11:50:01 crc kubenswrapper[4766]: E0129 11:50:01.117440 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="7245aebe-fe32-42fc-a489-c38b15bb4308" containerName="nova-cell0-conductor-conductor" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.117758 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4e41-account-create-update-rkj42"] Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.120978 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-5kz4c" Jan 29 11:50:01 crc kubenswrapper[4766]: E0129 11:50:01.134740 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="587fc7f2d8e47e6824d572c88379e7c339ef834f4f2a33713d946a1ea350ea67" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 29 11:50:01 crc kubenswrapper[4766]: E0129 11:50:01.142611 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="587fc7f2d8e47e6824d572c88379e7c339ef834f4f2a33713d946a1ea350ea67" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 29 11:50:01 crc kubenswrapper[4766]: E0129 11:50:01.144136 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="587fc7f2d8e47e6824d572c88379e7c339ef834f4f2a33713d946a1ea350ea67" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 29 11:50:01 crc kubenswrapper[4766]: E0129 11:50:01.144215 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="9453e394-ed9c-4d36-b200-e559e620a7f7" containerName="ovn-northd" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.149662 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-hdfk6"] Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.179661 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-hdfk6"] Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.210554 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5zsbb"] Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.222064 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-862hs"] Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.260520 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-8d49f9cb5-5nhnk" podUID="dd5d6aa7-be8d-4439-a4d3-70272705cc2f" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.154:9696/\": dial tcp 10.217.0.154:9696: connect: connection refused" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.268008 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="075438aa-afe6-4a7c-aa4a-a9b89406b170" path="/var/lib/kubelet/pods/075438aa-afe6-4a7c-aa4a-a9b89406b170/volumes" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.269650 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bc3c63-cee9-4f14-82bf-2f912e65cf14" path="/var/lib/kubelet/pods/16bc3c63-cee9-4f14-82bf-2f912e65cf14/volumes" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.274463 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20510388-23e4-4945-a5d5-db74a909518c" path="/var/lib/kubelet/pods/20510388-23e4-4945-a5d5-db74a909518c/volumes" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.275273 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a17ce00-9749-4f6d-8259-b25a78cdf8a7" 
path="/var/lib/kubelet/pods/5a17ce00-9749-4f6d-8259-b25a78cdf8a7/volumes" Jan 29 11:50:01 crc kubenswrapper[4766]: E0129 11:50:01.279582 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d0c73be724cc09499410e85d8a2850f80580b59a49608c7346ae0c91c515cca" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 11:50:01 crc kubenswrapper[4766]: E0129 11:50:01.279731 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2 is running failed: container process not found" containerID="ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.281758 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="628d9a82-bc49-44b5-a259-9d7f39bcb803" path="/var/lib/kubelet/pods/628d9a82-bc49-44b5-a259-9d7f39bcb803/volumes" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.282343 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75249388-3798-4187-b09f-2e2bdfb0fd85" path="/var/lib/kubelet/pods/75249388-3798-4187-b09f-2e2bdfb0fd85/volumes" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.285317 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b251ce0-eaf1-43fc-97a0-e59a8b829b28" path="/var/lib/kubelet/pods/7b251ce0-eaf1-43fc-97a0-e59a8b829b28/volumes" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.286090 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ecbeee8-c61d-4b75-bc60-021d3739e386" path="/var/lib/kubelet/pods/8ecbeee8-c61d-4b75-bc60-021d3739e386/volumes" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.286713 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b251c8b1-bef8-4e31-86dd-fdfca1dc0594" path="/var/lib/kubelet/pods/b251c8b1-bef8-4e31-86dd-fdfca1dc0594/volumes" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.287891 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8e7c3a2-1a70-4e43-84db-21832edfdfe1" path="/var/lib/kubelet/pods/c8e7c3a2-1a70-4e43-84db-21832edfdfe1/volumes" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.297222 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc11389e-2508-468a-b9ec-25acfbde9046" path="/var/lib/kubelet/pods/cc11389e-2508-468a-b9ec-25acfbde9046/volumes" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.298550 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b" path="/var/lib/kubelet/pods/d18ea196-39f9-4cb4-b0f3-6ac9ec23b11b/volumes" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.299023 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d55d102c-fb19-40f9-be67-8234ec2232c4" path="/var/lib/kubelet/pods/d55d102c-fb19-40f9-be67-8234ec2232c4/volumes" Jan 29 11:50:01 crc kubenswrapper[4766]: E0129 11:50:01.299304 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d0c73be724cc09499410e85d8a2850f80580b59a49608c7346ae0c91c515cca" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 11:50:01 crc kubenswrapper[4766]: E0129 11:50:01.299421 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2 is running failed: container process not found" containerID="ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.299529 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db09eba3-8fd8-4448-8e6c-2819328ac301" path="/var/lib/kubelet/pods/db09eba3-8fd8-4448-8e6c-2819328ac301/volumes" Jan 29 11:50:01 crc kubenswrapper[4766]: E0129 11:50:01.301525 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d0c73be724cc09499410e85d8a2850f80580b59a49608c7346ae0c91c515cca" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 11:50:01 crc kubenswrapper[4766]: E0129 11:50:01.301581 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-2gh2n" podUID="be830961-a6c3-4340-a134-ea20de96b31b" containerName="ovs-vswitchd" Jan 29 11:50:01 crc kubenswrapper[4766]: E0129 11:50:01.301837 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2 is running failed: container process not found" containerID="ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 11:50:01 crc kubenswrapper[4766]: E0129 11:50:01.301859 4766 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-2gh2n" podUID="be830961-a6c3-4340-a134-ea20de96b31b" containerName="ovsdb-server" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.302381 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/73cf0e15-caab-4cea-94b5-7470d635d767-var-run\") pod \"73cf0e15-caab-4cea-94b5-7470d635d767\" (UID: \"73cf0e15-caab-4cea-94b5-7470d635d767\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.307467 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/73cf0e15-caab-4cea-94b5-7470d635d767-ovn-controller-tls-certs\") pod \"73cf0e15-caab-4cea-94b5-7470d635d767\" (UID: \"73cf0e15-caab-4cea-94b5-7470d635d767\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.303833 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73cf0e15-caab-4cea-94b5-7470d635d767-var-run" (OuterVolumeSpecName: "var-run") pod "73cf0e15-caab-4cea-94b5-7470d635d767" (UID: "73cf0e15-caab-4cea-94b5-7470d635d767"). 
InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.307583 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwlrd\" (UniqueName: \"kubernetes.io/projected/73cf0e15-caab-4cea-94b5-7470d635d767-kube-api-access-lwlrd\") pod \"73cf0e15-caab-4cea-94b5-7470d635d767\" (UID: \"73cf0e15-caab-4cea-94b5-7470d635d767\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.307671 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73cf0e15-caab-4cea-94b5-7470d635d767-scripts\") pod \"73cf0e15-caab-4cea-94b5-7470d635d767\" (UID: \"73cf0e15-caab-4cea-94b5-7470d635d767\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.307700 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/73cf0e15-caab-4cea-94b5-7470d635d767-var-log-ovn\") pod \"73cf0e15-caab-4cea-94b5-7470d635d767\" (UID: \"73cf0e15-caab-4cea-94b5-7470d635d767\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.307743 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/73cf0e15-caab-4cea-94b5-7470d635d767-var-run-ovn\") pod \"73cf0e15-caab-4cea-94b5-7470d635d767\" (UID: \"73cf0e15-caab-4cea-94b5-7470d635d767\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.307819 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73cf0e15-caab-4cea-94b5-7470d635d767-combined-ca-bundle\") pod \"73cf0e15-caab-4cea-94b5-7470d635d767\" (UID: \"73cf0e15-caab-4cea-94b5-7470d635d767\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.308657 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73cf0e15-caab-4cea-94b5-7470d635d767-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "73cf0e15-caab-4cea-94b5-7470d635d767" (UID: "73cf0e15-caab-4cea-94b5-7470d635d767"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.308693 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73cf0e15-caab-4cea-94b5-7470d635d767-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "73cf0e15-caab-4cea-94b5-7470d635d767" (UID: "73cf0e15-caab-4cea-94b5-7470d635d767"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.309669 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73cf0e15-caab-4cea-94b5-7470d635d767-scripts" (OuterVolumeSpecName: "scripts") pod "73cf0e15-caab-4cea-94b5-7470d635d767" (UID: "73cf0e15-caab-4cea-94b5-7470d635d767"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.311153 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5484d77-284b-4422-aa8a-c44761f4c8e9" path="/var/lib/kubelet/pods/e5484d77-284b-4422-aa8a-c44761f4c8e9/volumes" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.311698 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feb7e1b9-9706-4112-88e1-6bd624f14680" path="/var/lib/kubelet/pods/feb7e1b9-9706-4112-88e1-6bd624f14680/volumes" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.316162 4766 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/73cf0e15-caab-4cea-94b5-7470d635d767-var-run\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.316795 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73cf0e15-caab-4cea-94b5-7470d635d767-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.316964 4766 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/73cf0e15-caab-4cea-94b5-7470d635d767-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.317110 4766 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/73cf0e15-caab-4cea-94b5-7470d635d767-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:01 crc kubenswrapper[4766]: E0129 11:50:01.318855 4766 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 11:50:01 crc kubenswrapper[4766]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 29 11:50:01 crc kubenswrapper[4766]: Jan 29 11:50:01 crc kubenswrapper[4766]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 11:50:01 crc kubenswrapper[4766]: Jan 29 11:50:01 crc kubenswrapper[4766]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 29 11:50:01 crc kubenswrapper[4766]: Jan 29 11:50:01 crc kubenswrapper[4766]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 29 11:50:01 crc kubenswrapper[4766]: Jan 29 11:50:01 crc kubenswrapper[4766]: if [ -n "placement" ]; then Jan 29 11:50:01 crc kubenswrapper[4766]: GRANT_DATABASE="placement" Jan 29 11:50:01 crc kubenswrapper[4766]: else Jan 29 11:50:01 crc kubenswrapper[4766]: GRANT_DATABASE="*" Jan 29 11:50:01 crc kubenswrapper[4766]: fi Jan 29 11:50:01 crc kubenswrapper[4766]: Jan 29 11:50:01 crc kubenswrapper[4766]: # going for maximum compatibility here: Jan 29 11:50:01 crc kubenswrapper[4766]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 11:50:01 crc kubenswrapper[4766]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 11:50:01 crc kubenswrapper[4766]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 29 11:50:01 crc kubenswrapper[4766]: # support updates Jan 29 11:50:01 crc kubenswrapper[4766]: Jan 29 11:50:01 crc kubenswrapper[4766]: $MYSQL_CMD < logger="UnhandledError" Jan 29 11:50:01 crc kubenswrapper[4766]: E0129 11:50:01.321353 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-fb14-account-create-update-5hrp9" podUID="ee945927-3683-4163-ac37-83d894a9569b" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.323285 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73cf0e15-caab-4cea-94b5-7470d635d767-kube-api-access-lwlrd" (OuterVolumeSpecName: "kube-api-access-lwlrd") pod "73cf0e15-caab-4cea-94b5-7470d635d767" (UID: "73cf0e15-caab-4cea-94b5-7470d635d767"). InnerVolumeSpecName "kube-api-access-lwlrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.344027 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73cf0e15-caab-4cea-94b5-7470d635d767-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73cf0e15-caab-4cea-94b5-7470d635d767" (UID: "73cf0e15-caab-4cea-94b5-7470d635d767"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:01 crc kubenswrapper[4766]: E0129 11:50:01.402182 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34a8c513_ef7f_49ce_a0d8_2d9351abca2a.slice/crio-ca450350b6e568d52d4063cfee6673c0157620922fe751480913c07db96dc186.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda084e5b1_d167_4678_8ab9_af72fb1d07fd.slice/crio-69129b08fe5bfc552d777715d2a1eac20f74a31b1c06ebb3940050c592d7eaeb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd5d6aa7_be8d_4439_a4d3_70272705cc2f.slice/crio-conmon-d8dbef2524f2542af763a7cb33a1638c422019cf0cf86edf0a6139eede756496.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda084e5b1_d167_4678_8ab9_af72fb1d07fd.slice/crio-conmon-69129b08fe5bfc552d777715d2a1eac20f74a31b1c06ebb3940050c592d7eaeb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc299dfaa_12db_4482_ab89_55ba85b8e2a7.slice/crio-conmon-0a43006268a6331aa5c508b013f959c36b198052b905a54a63dfcc6e786548d6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34a8c513_ef7f_49ce_a0d8_2d9351abca2a.slice/crio-conmon-ca450350b6e568d52d4063cfee6673c0157620922fe751480913c07db96dc186.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc299dfaa_12db_4482_ab89_55ba85b8e2a7.slice/crio-0a43006268a6331aa5c508b013f959c36b198052b905a54a63dfcc6e786548d6.scope\": RecentStats: unable to find data in memory cache]" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.420875 4766 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-lwlrd\" (UniqueName: \"kubernetes.io/projected/73cf0e15-caab-4cea-94b5-7470d635d767-kube-api-access-lwlrd\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.420912 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73cf0e15-caab-4cea-94b5-7470d635d767-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.433759 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5zsbb"] Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.434056 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-862hs"] Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.434071 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.434087 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.434099 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-fb14-account-create-update-5hrp9"] Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.434110 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-vc9jp"] Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.434119 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-9kz8m"] Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.434130 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-9kz8m"] Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.434142 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-vc9jp"] Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.434153 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.434166 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4e41-account-create-update-rkj42"] Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.434548 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="982e76a1-f77f-4569-bb8e-f524dba573ca" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://be80252d4149682a15ba44a10bb7c78f7b0e2a78056ea82dcca8b68ed0a66ffa" gracePeriod=30 Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.435185 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-2b54-account-create-update-fxkwh"] Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.435236 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-pmh5k"] Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.435255 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-pmh5k"] Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.435276 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-667bcbf4cf-kw66x"] Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.435298 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-fb14-account-create-update-5hrp9"] Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.435317 4766 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="6c6eae2b-18a8-4a82-95e2-4940490b1678" containerName="nova-cell1-conductor-conductor" containerID="cri-o://004bd341daa79bad3d15e54cc1bb127c54401c3d66802d245e39b218f040695f" gracePeriod=30 Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.435627 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-667bcbf4cf-kw66x" podUID="55700325-5d09-47fc-adad-06c1a8fbbee4" containerName="proxy-httpd" containerID="cri-o://b0a6a18a240b7164e62a8b48d6bc1c1984abcb6427f837cc8857a8516b49b51f" gracePeriod=30 Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.435685 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-667bcbf4cf-kw66x" podUID="55700325-5d09-47fc-adad-06c1a8fbbee4" containerName="proxy-server" containerID="cri-o://02a9693eb69c4db962a68ada03e1079457c0c7d3b72c123f0dcacc6c9a65052f" gracePeriod=30 Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.467694 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_51f2b06e-748d-4bb1-b7e7-f5cd039a532d/ovsdbserver-nb/0.log" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.467768 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.493280 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c961d826-8e7c-45cf-afa0-a1712a3def4f/ovsdbserver-sb/0.log" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.493392 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.521603 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="ace2f6ec-cf57-4742-82e9-e13fd230bb69" containerName="rabbitmq" containerID="cri-o://07c7e43f4c233bc15a95251cad07a884a33a05f78743bb5a3c6f01f63b880784" gracePeriod=604800 Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.556481 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.578978 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.611502 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73cf0e15-caab-4cea-94b5-7470d635d767-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "73cf0e15-caab-4cea-94b5-7470d635d767" (UID: "73cf0e15-caab-4cea-94b5-7470d635d767"). InnerVolumeSpecName "ovn-controller-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.623828 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c961d826-8e7c-45cf-afa0-a1712a3def4f-metrics-certs-tls-certs\") pod \"c961d826-8e7c-45cf-afa0-a1712a3def4f\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.623900 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-metrics-certs-tls-certs\") pod \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.623918 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mb62\" (UniqueName: \"kubernetes.io/projected/c961d826-8e7c-45cf-afa0-a1712a3def4f-kube-api-access-6mb62\") pod \"c961d826-8e7c-45cf-afa0-a1712a3def4f\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.623941 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c961d826-8e7c-45cf-afa0-a1712a3def4f-config\") pod \"c961d826-8e7c-45cf-afa0-a1712a3def4f\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.623982 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c961d826-8e7c-45cf-afa0-a1712a3def4f-ovsdb-rundir\") pod \"c961d826-8e7c-45cf-afa0-a1712a3def4f\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.624000 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c961d826-8e7c-45cf-afa0-a1712a3def4f-scripts\") pod \"c961d826-8e7c-45cf-afa0-a1712a3def4f\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.624054 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-config\") pod \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.624073 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.624107 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-scripts\") pod \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.624144 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c961d826-8e7c-45cf-afa0-a1712a3def4f-ovsdbserver-sb-tls-certs\") pod \"c961d826-8e7c-45cf-afa0-a1712a3def4f\" (UID: 
\"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.624172 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c961d826-8e7c-45cf-afa0-a1712a3def4f-combined-ca-bundle\") pod \"c961d826-8e7c-45cf-afa0-a1712a3def4f\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.624201 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-combined-ca-bundle\") pod \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.624268 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"c961d826-8e7c-45cf-afa0-a1712a3def4f\" (UID: \"c961d826-8e7c-45cf-afa0-a1712a3def4f\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.624285 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-ovsdb-rundir\") pod \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.624350 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-ovsdbserver-nb-tls-certs\") pod \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.624403 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjs9v\" (UniqueName: \"kubernetes.io/projected/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-kube-api-access-wjs9v\") pod \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\" (UID: \"51f2b06e-748d-4bb1-b7e7-f5cd039a532d\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.624826 4766 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/73cf0e15-caab-4cea-94b5-7470d635d767-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.625084 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-config" (OuterVolumeSpecName: "config") pod "51f2b06e-748d-4bb1-b7e7-f5cd039a532d" (UID: "51f2b06e-748d-4bb1-b7e7-f5cd039a532d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.625540 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c961d826-8e7c-45cf-afa0-a1712a3def4f-config" (OuterVolumeSpecName: "config") pod "c961d826-8e7c-45cf-afa0-a1712a3def4f" (UID: "c961d826-8e7c-45cf-afa0-a1712a3def4f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.626070 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c961d826-8e7c-45cf-afa0-a1712a3def4f-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "c961d826-8e7c-45cf-afa0-a1712a3def4f" (UID: "c961d826-8e7c-45cf-afa0-a1712a3def4f"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.626579 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c961d826-8e7c-45cf-afa0-a1712a3def4f-scripts" (OuterVolumeSpecName: "scripts") pod "c961d826-8e7c-45cf-afa0-a1712a3def4f" (UID: "c961d826-8e7c-45cf-afa0-a1712a3def4f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.632653 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-scripts" (OuterVolumeSpecName: "scripts") pod "51f2b06e-748d-4bb1-b7e7-f5cd039a532d" (UID: "51f2b06e-748d-4bb1-b7e7-f5cd039a532d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.642993 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-kube-api-access-wjs9v" (OuterVolumeSpecName: "kube-api-access-wjs9v") pod "51f2b06e-748d-4bb1-b7e7-f5cd039a532d" (UID: "51f2b06e-748d-4bb1-b7e7-f5cd039a532d"). InnerVolumeSpecName "kube-api-access-wjs9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.643120 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c961d826-8e7c-45cf-afa0-a1712a3def4f-kube-api-access-6mb62" (OuterVolumeSpecName: "kube-api-access-6mb62") pod "c961d826-8e7c-45cf-afa0-a1712a3def4f" (UID: "c961d826-8e7c-45cf-afa0-a1712a3def4f"). InnerVolumeSpecName "kube-api-access-6mb62". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.643554 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "51f2b06e-748d-4bb1-b7e7-f5cd039a532d" (UID: "51f2b06e-748d-4bb1-b7e7-f5cd039a532d"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.655193 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="4f673618-4b7d-47e5-84af-092c995bca8e" containerName="galera" containerID="cri-o://4d4badd8b305888bf4d052a10979015964644aa496c75333eb425b61d68f5844" gracePeriod=30 Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.666852 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "51f2b06e-748d-4bb1-b7e7-f5cd039a532d" (UID: "51f2b06e-748d-4bb1-b7e7-f5cd039a532d"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.709564 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "c961d826-8e7c-45cf-afa0-a1712a3def4f" (UID: "c961d826-8e7c-45cf-afa0-a1712a3def4f"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.725584 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9wjv\" (UniqueName: \"kubernetes.io/projected/f484f11d-a20d-4d69-9619-d5f8df022bd7-kube-api-access-p9wjv\") pod \"f484f11d-a20d-4d69-9619-d5f8df022bd7\" (UID: \"f484f11d-a20d-4d69-9619-d5f8df022bd7\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.725706 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f484f11d-a20d-4d69-9619-d5f8df022bd7-openstack-config\") pod \"f484f11d-a20d-4d69-9619-d5f8df022bd7\" (UID: \"f484f11d-a20d-4d69-9619-d5f8df022bd7\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.725763 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpm9s\" (UniqueName: \"kubernetes.io/projected/d9ea6d98-59cc-4526-bf59-7328c0321f59-kube-api-access-kpm9s\") pod \"d9ea6d98-59cc-4526-bf59-7328c0321f59\" (UID: \"d9ea6d98-59cc-4526-bf59-7328c0321f59\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.725812 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f484f11d-a20d-4d69-9619-d5f8df022bd7-openstack-config-secret\") pod \"f484f11d-a20d-4d69-9619-d5f8df022bd7\" (UID: \"f484f11d-a20d-4d69-9619-d5f8df022bd7\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.725893 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-ovsdbserver-sb\") pod \"d9ea6d98-59cc-4526-bf59-7328c0321f59\" (UID: \"d9ea6d98-59cc-4526-bf59-7328c0321f59\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.725919 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-dns-swift-storage-0\") pod \"d9ea6d98-59cc-4526-bf59-7328c0321f59\" (UID: \"d9ea6d98-59cc-4526-bf59-7328c0321f59\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.725963 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f484f11d-a20d-4d69-9619-d5f8df022bd7-combined-ca-bundle\") pod \"f484f11d-a20d-4d69-9619-d5f8df022bd7\" (UID: \"f484f11d-a20d-4d69-9619-d5f8df022bd7\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.725998 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-ovsdbserver-nb\") pod \"d9ea6d98-59cc-4526-bf59-7328c0321f59\" (UID: \"d9ea6d98-59cc-4526-bf59-7328c0321f59\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.726030 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-dns-svc\") pod \"d9ea6d98-59cc-4526-bf59-7328c0321f59\" (UID: \"d9ea6d98-59cc-4526-bf59-7328c0321f59\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.726095 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-config\") pod \"d9ea6d98-59cc-4526-bf59-7328c0321f59\" (UID: \"d9ea6d98-59cc-4526-bf59-7328c0321f59\") " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.726667 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.726688 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.726703 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjs9v\" (UniqueName: \"kubernetes.io/projected/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-kube-api-access-wjs9v\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.726714 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mb62\" (UniqueName: \"kubernetes.io/projected/c961d826-8e7c-45cf-afa0-a1712a3def4f-kube-api-access-6mb62\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.726727 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c961d826-8e7c-45cf-afa0-a1712a3def4f-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.726738 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c961d826-8e7c-45cf-afa0-a1712a3def4f-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.726749 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c961d826-8e7c-45cf-afa0-a1712a3def4f-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.726759 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.726776 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.726788 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.742737 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f484f11d-a20d-4d69-9619-d5f8df022bd7-kube-api-access-p9wjv" (OuterVolumeSpecName: "kube-api-access-p9wjv") pod "f484f11d-a20d-4d69-9619-d5f8df022bd7" (UID: "f484f11d-a20d-4d69-9619-d5f8df022bd7"). InnerVolumeSpecName "kube-api-access-p9wjv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.742797 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51f2b06e-748d-4bb1-b7e7-f5cd039a532d" (UID: "51f2b06e-748d-4bb1-b7e7-f5cd039a532d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.781591 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c961d826-8e7c-45cf-afa0-a1712a3def4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c961d826-8e7c-45cf-afa0-a1712a3def4f" (UID: "c961d826-8e7c-45cf-afa0-a1712a3def4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.803985 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9ea6d98-59cc-4526-bf59-7328c0321f59-kube-api-access-kpm9s" (OuterVolumeSpecName: "kube-api-access-kpm9s") pod "d9ea6d98-59cc-4526-bf59-7328c0321f59" (UID: "d9ea6d98-59cc-4526-bf59-7328c0321f59"). InnerVolumeSpecName "kube-api-access-kpm9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.812813 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.829586 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="b77b577e-b980-46fb-945a-a0b57e3bdc17" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.96:5671: connect: connection refused" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.830957 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c961d826-8e7c-45cf-afa0-a1712a3def4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.830985 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.830999 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9wjv\" (UniqueName: \"kubernetes.io/projected/f484f11d-a20d-4d69-9619-d5f8df022bd7-kube-api-access-p9wjv\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.831010 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpm9s\" (UniqueName: \"kubernetes.io/projected/d9ea6d98-59cc-4526-bf59-7328c0321f59-kube-api-access-kpm9s\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.831024 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.979810 4766 generic.go:334] "Generic (PLEG): container finished" podID="e238ce2e-9a21-43c5-94c2-0a31ab078c79" containerID="679c7206ac2f82b82e8b1a3ca3a64bf5f1d0710a5dba85f183e20c4390695423" 
exitCode=143 Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.979903 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-67f655d9dc-95fxw" event={"ID":"e238ce2e-9a21-43c5-94c2-0a31ab078c79","Type":"ContainerDied","Data":"679c7206ac2f82b82e8b1a3ca3a64bf5f1d0710a5dba85f183e20c4390695423"} Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.986870 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f484f11d-a20d-4d69-9619-d5f8df022bd7-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "f484f11d-a20d-4d69-9619-d5f8df022bd7" (UID: "f484f11d-a20d-4d69-9619-d5f8df022bd7"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.992137 4766 generic.go:334] "Generic (PLEG): container finished" podID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerID="b5a310208e51de3a1f1085a299d696e0c092c1ac6a305a7368d95a466bfff254" exitCode=0 Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.992170 4766 generic.go:334] "Generic (PLEG): container finished" podID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerID="d7be2c0fabfadf12060358b5738adc72343b29f57c77135d1af1a5ae1e4e2863" exitCode=0 Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.992181 4766 generic.go:334] "Generic (PLEG): container finished" podID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerID="7c33d37f74f55ffa51cd765a4b94d2af021150d55ef7e15a523b325c621e7d0a" exitCode=0 Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.992189 4766 generic.go:334] "Generic (PLEG): container finished" podID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerID="0a43006268a6331aa5c508b013f959c36b198052b905a54a63dfcc6e786548d6" exitCode=0 Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.992242 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerDied","Data":"b5a310208e51de3a1f1085a299d696e0c092c1ac6a305a7368d95a466bfff254"} Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.992334 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerDied","Data":"d7be2c0fabfadf12060358b5738adc72343b29f57c77135d1af1a5ae1e4e2863"} Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.992364 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerDied","Data":"7c33d37f74f55ffa51cd765a4b94d2af021150d55ef7e15a523b325c621e7d0a"} Jan 29 11:50:01 crc kubenswrapper[4766]: I0129 11:50:01.992374 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerDied","Data":"0a43006268a6331aa5c508b013f959c36b198052b905a54a63dfcc6e786548d6"} Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.015488 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.031114 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f484f11d-a20d-4d69-9619-d5f8df022bd7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f484f11d-a20d-4d69-9619-d5f8df022bd7" (UID: 
"f484f11d-a20d-4d69-9619-d5f8df022bd7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.031349 4766 generic.go:334] "Generic (PLEG): container finished" podID="be830961-a6c3-4340-a134-ea20de96b31b" containerID="ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2" exitCode=0 Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.031649 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2gh2n" event={"ID":"be830961-a6c3-4340-a134-ea20de96b31b","Type":"ContainerDied","Data":"ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2"} Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.035223 4766 generic.go:334] "Generic (PLEG): container finished" podID="15805cd2-3301-4e59-8c66-adde53408809" containerID="d3a5a4ab1f26a3b0ec0c993790441804f0c92c85eb73ffb26bede23ff956c81f" exitCode=143 Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.035384 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2" event={"ID":"15805cd2-3301-4e59-8c66-adde53408809","Type":"ContainerDied","Data":"d3a5a4ab1f26a3b0ec0c993790441804f0c92c85eb73ffb26bede23ff956c81f"} Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.039363 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5kz4c" event={"ID":"73cf0e15-caab-4cea-94b5-7470d635d767","Type":"ContainerDied","Data":"c28c597ce46a345605c7d91b21af94df40025b19ddd33835eea147c3d4543a81"} Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.039458 4766 scope.go:117] "RemoveContainer" containerID="36c3a3ca13981a2584b6cdf28bc3d3bcfe78cd3a54aa84f86d93f915ae3c8201" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.039453 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5kz4c" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.065564 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_51f2b06e-748d-4bb1-b7e7-f5cd039a532d/ovsdbserver-nb/0.log" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.065962 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"51f2b06e-748d-4bb1-b7e7-f5cd039a532d","Type":"ContainerDied","Data":"4b038c07875536956bb8fa9ef2ba60d199c6693186e25830439d36a2a72eac99"} Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.066117 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.069361 4766 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f484f11d-a20d-4d69-9619-d5f8df022bd7-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.069388 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f484f11d-a20d-4d69-9619-d5f8df022bd7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.069399 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.070114 4766 generic.go:334] "Generic (PLEG): container finished" podID="55700325-5d09-47fc-adad-06c1a8fbbee4" containerID="b0a6a18a240b7164e62a8b48d6bc1c1984abcb6427f837cc8857a8516b49b51f" exitCode=0 Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.070212 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-667bcbf4cf-kw66x" event={"ID":"55700325-5d09-47fc-adad-06c1a8fbbee4","Type":"ContainerDied","Data":"b0a6a18a240b7164e62a8b48d6bc1c1984abcb6427f837cc8857a8516b49b51f"} Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.124218 4766 scope.go:117] "RemoveContainer" containerID="244347370c5e70ae119750458317e88ba78b6c9c01068f9f4942f415f38e3b6c" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.124467 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-5kz4c"] Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.133825 4766 generic.go:334] "Generic (PLEG): container finished" podID="dd5d6aa7-be8d-4439-a4d3-70272705cc2f" containerID="d8dbef2524f2542af763a7cb33a1638c422019cf0cf86edf0a6139eede756496" exitCode=0 Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.133923 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8d49f9cb5-5nhnk" event={"ID":"dd5d6aa7-be8d-4439-a4d3-70272705cc2f","Type":"ContainerDied","Data":"d8dbef2524f2542af763a7cb33a1638c422019cf0cf86edf0a6139eede756496"} Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.145358 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-5kz4c"] Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.148511 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4e41-account-create-update-rkj42" event={"ID":"fa516723-105a-4ea0-98d7-317538e3d438","Type":"ContainerStarted","Data":"9c7fa56cadd5d905af55463952ceb6834351acb719a90655e7a4fb7e1bbc2476"} Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.150556 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fb14-account-create-update-5hrp9" event={"ID":"ee945927-3683-4163-ac37-83d894a9569b","Type":"ContainerStarted","Data":"2cd775c9a342e45a9eaab28b98ce5cc214e4598074cb2861230e916c669915c4"} Jan 29 11:50:02 crc kubenswrapper[4766]: E0129 11:50:02.153526 4766 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 11:50:02 crc kubenswrapper[4766]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 29 11:50:02 crc kubenswrapper[4766]: Jan 29 11:50:02 crc kubenswrapper[4766]: MYSQL_REMOTE_HOST="" 
source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 11:50:02 crc kubenswrapper[4766]: Jan 29 11:50:02 crc kubenswrapper[4766]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 29 11:50:02 crc kubenswrapper[4766]: Jan 29 11:50:02 crc kubenswrapper[4766]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 29 11:50:02 crc kubenswrapper[4766]: Jan 29 11:50:02 crc kubenswrapper[4766]: if [ -n "placement" ]; then Jan 29 11:50:02 crc kubenswrapper[4766]: GRANT_DATABASE="placement" Jan 29 11:50:02 crc kubenswrapper[4766]: else Jan 29 11:50:02 crc kubenswrapper[4766]: GRANT_DATABASE="*" Jan 29 11:50:02 crc kubenswrapper[4766]: fi Jan 29 11:50:02 crc kubenswrapper[4766]: Jan 29 11:50:02 crc kubenswrapper[4766]: # going for maximum compatibility here: Jan 29 11:50:02 crc kubenswrapper[4766]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 11:50:02 crc kubenswrapper[4766]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 11:50:02 crc kubenswrapper[4766]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 29 11:50:02 crc kubenswrapper[4766]: # support updates Jan 29 11:50:02 crc kubenswrapper[4766]: Jan 29 11:50:02 crc kubenswrapper[4766]: $MYSQL_CMD < logger="UnhandledError" Jan 29 11:50:02 crc kubenswrapper[4766]: E0129 11:50:02.154921 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-fb14-account-create-update-5hrp9" podUID="ee945927-3683-4163-ac37-83d894a9569b" Jan 29 11:50:02 crc kubenswrapper[4766]: E0129 11:50:02.157172 4766 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 11:50:02 crc kubenswrapper[4766]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 29 11:50:02 crc kubenswrapper[4766]: Jan 29 11:50:02 crc kubenswrapper[4766]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 11:50:02 crc kubenswrapper[4766]: Jan 29 11:50:02 crc kubenswrapper[4766]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 29 11:50:02 crc kubenswrapper[4766]: Jan 29 11:50:02 crc kubenswrapper[4766]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 29 11:50:02 crc kubenswrapper[4766]: Jan 29 11:50:02 crc kubenswrapper[4766]: if [ -n "glance" ]; then Jan 29 11:50:02 crc kubenswrapper[4766]: GRANT_DATABASE="glance" Jan 29 11:50:02 crc kubenswrapper[4766]: else Jan 29 11:50:02 crc kubenswrapper[4766]: GRANT_DATABASE="*" Jan 29 11:50:02 crc kubenswrapper[4766]: fi Jan 29 11:50:02 crc kubenswrapper[4766]: Jan 29 11:50:02 crc kubenswrapper[4766]: # going for maximum compatibility here: Jan 29 11:50:02 crc kubenswrapper[4766]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 11:50:02 crc kubenswrapper[4766]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 11:50:02 crc kubenswrapper[4766]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 29 11:50:02 crc kubenswrapper[4766]: # support updates Jan 29 11:50:02 crc kubenswrapper[4766]: Jan 29 11:50:02 crc kubenswrapper[4766]: $MYSQL_CMD < logger="UnhandledError" Jan 29 11:50:02 crc kubenswrapper[4766]: E0129 11:50:02.179967 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-4e41-account-create-update-rkj42" podUID="fa516723-105a-4ea0-98d7-317538e3d438" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.198653 4766 scope.go:117] "RemoveContainer" containerID="8f06668fc700d7443d44bf9dde78d755fef11d0abb78bc9734f13c5f3c751e31" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.198667 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "51f2b06e-748d-4bb1-b7e7-f5cd039a532d" (UID: "51f2b06e-748d-4bb1-b7e7-f5cd039a532d"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.205767 4766 generic.go:334] "Generic (PLEG): container finished" podID="6739e909-eb6b-4578-8436-fa9f24385e0a" containerID="d679005363f3d7068f40510c318a9543fe5b63ce1d7e7cc636a9215f217d7925" exitCode=1 Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.205843 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lxhcz" event={"ID":"6739e909-eb6b-4578-8436-fa9f24385e0a","Type":"ContainerDied","Data":"d679005363f3d7068f40510c318a9543fe5b63ce1d7e7cc636a9215f217d7925"} Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.205870 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lxhcz" event={"ID":"6739e909-eb6b-4578-8436-fa9f24385e0a","Type":"ContainerStarted","Data":"f107edeb0cf35d80499dbe98e5cf3636533bd7951938e89d6e981b29197977de"} Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.206696 4766 scope.go:117] "RemoveContainer" containerID="f107edeb0cf35d80499dbe98e5cf3636533bd7951938e89d6e981b29197977de" Jan 29 11:50:02 crc kubenswrapper[4766]: E0129 11:50:02.206929 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-lxhcz_openstack(6739e909-eb6b-4578-8436-fa9f24385e0a)\"" pod="openstack/root-account-create-update-lxhcz" podUID="6739e909-eb6b-4578-8436-fa9f24385e0a" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.218787 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d9ea6d98-59cc-4526-bf59-7328c0321f59" (UID: "d9ea6d98-59cc-4526-bf59-7328c0321f59"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.225659 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2b54-account-create-update-fxkwh" event={"ID":"1dd3143d-eabf-4163-a7cb-590dc11a2daf","Type":"ContainerStarted","Data":"d20b357216e2955d82e901a1ff9a90d20aacd06c02efd2b3312fd2f894c35fbd"} Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.241706 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d9ea6d98-59cc-4526-bf59-7328c0321f59" (UID: "d9ea6d98-59cc-4526-bf59-7328c0321f59"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.245543 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c961d826-8e7c-45cf-afa0-a1712a3def4f/ovsdbserver-sb/0.log" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.245791 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.246147 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c961d826-8e7c-45cf-afa0-a1712a3def4f","Type":"ContainerDied","Data":"5b8fb0e84bb620bce5c18a5dcbdf70d19eb3cff887eacb2d543e27f4e9dc6f9f"} Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.247187 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-config" (OuterVolumeSpecName: "config") pod "d9ea6d98-59cc-4526-bf59-7328c0321f59" (UID: "d9ea6d98-59cc-4526-bf59-7328c0321f59"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.271831 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" event={"ID":"d9ea6d98-59cc-4526-bf59-7328c0321f59","Type":"ContainerDied","Data":"a5fbf32070653413c5e083f83b6588d585cc0b28f1da2b460f5cf63726690a93"} Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.272129 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.293090 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.293123 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.293135 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.293146 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.294584 4766 generic.go:334] "Generic (PLEG): container finished" podID="dd1ffb49-b314-4d31-94d6-de70e35d917e" containerID="5f484b8e00e79b044b603b23bc146e1024f8a58609cafd703ef2e0617e674445" exitCode=143 Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.294643 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dd1ffb49-b314-4d31-94d6-de70e35d917e","Type":"ContainerDied","Data":"5f484b8e00e79b044b603b23bc146e1024f8a58609cafd703ef2e0617e674445"} Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.322526 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d9ea6d98-59cc-4526-bf59-7328c0321f59" (UID: "d9ea6d98-59cc-4526-bf59-7328c0321f59"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.324317 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "51f2b06e-748d-4bb1-b7e7-f5cd039a532d" (UID: "51f2b06e-748d-4bb1-b7e7-f5cd039a532d"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.325640 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.325763 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c961d826-8e7c-45cf-afa0-a1712a3def4f-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "c961d826-8e7c-45cf-afa0-a1712a3def4f" (UID: "c961d826-8e7c-45cf-afa0-a1712a3def4f"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.329550 4766 generic.go:334] "Generic (PLEG): container finished" podID="0e9e7d37-60ae-4489-a69a-e4168eb87cf2" containerID="0a988c9f46e3a70b4049e9abe888a41821aad0a9143a7ab9d80be40f836fe69e" exitCode=0 Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.329814 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0e9e7d37-60ae-4489-a69a-e4168eb87cf2","Type":"ContainerDied","Data":"0a988c9f46e3a70b4049e9abe888a41821aad0a9143a7ab9d80be40f836fe69e"} Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.331304 4766 generic.go:334] "Generic (PLEG): container finished" podID="22118bb6-3dd9-41d5-8215-d8e4679828ba" containerID="5eccd5688bd3f21bd7b9ab6b4fa9bc25010dd2b9cb4c6d665db537e3ffb66b72" exitCode=143 Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.331348 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"22118bb6-3dd9-41d5-8215-d8e4679828ba","Type":"ContainerDied","Data":"5eccd5688bd3f21bd7b9ab6b4fa9bc25010dd2b9cb4c6d665db537e3ffb66b72"} Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.334396 4766 generic.go:334] "Generic (PLEG): container finished" podID="34a8c513-ef7f-49ce-a0d8-2d9351abca2a" containerID="ca450350b6e568d52d4063cfee6673c0157620922fe751480913c07db96dc186" exitCode=143 Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.334464 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34a8c513-ef7f-49ce-a0d8-2d9351abca2a","Type":"ContainerDied","Data":"ca450350b6e568d52d4063cfee6673c0157620922fe751480913c07db96dc186"} Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.345404 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f484f11d-a20d-4d69-9619-d5f8df022bd7-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "f484f11d-a20d-4d69-9619-d5f8df022bd7" (UID: "f484f11d-a20d-4d69-9619-d5f8df022bd7"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.347095 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d9ea6d98-59cc-4526-bf59-7328c0321f59" (UID: "d9ea6d98-59cc-4526-bf59-7328c0321f59"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.349019 4766 generic.go:334] "Generic (PLEG): container finished" podID="1996793f-f3ca-4559-97d6-867f0d0a2b61" containerID="5fa3e2236ec63b27db194527bb716839b21f9cea6f579d3762f4f41dced8ddd1" exitCode=143 Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.349533 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1996793f-f3ca-4559-97d6-867f0d0a2b61","Type":"ContainerDied","Data":"5fa3e2236ec63b27db194527bb716839b21f9cea6f579d3762f4f41dced8ddd1"} Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.356589 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c961d826-8e7c-45cf-afa0-a1712a3def4f-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "c961d826-8e7c-45cf-afa0-a1712a3def4f" (UID: "c961d826-8e7c-45cf-afa0-a1712a3def4f"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.358989 4766 generic.go:334] "Generic (PLEG): container finished" podID="8162079c-abe4-4e9c-bdd5-2fbb43187e61" containerID="8b8626b814bdc9ebbe0eb6d6c45744653225b6c9c53cd0a3325216664d30e4d6" exitCode=143 Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.359049 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ff4655576-rzc26" event={"ID":"8162079c-abe4-4e9c-bdd5-2fbb43187e61","Type":"ContainerDied","Data":"8b8626b814bdc9ebbe0eb6d6c45744653225b6c9c53cd0a3325216664d30e4d6"} Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.361062 4766 generic.go:334] "Generic (PLEG): container finished" podID="a084e5b1-d167-4678-8ab9-af72fb1d07fd" containerID="69129b08fe5bfc552d777715d2a1eac20f74a31b1c06ebb3940050c592d7eaeb" exitCode=143 Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.361091 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-864fcd46f6-bn7r2" event={"ID":"a084e5b1-d167-4678-8ab9-af72fb1d07fd","Type":"ContainerDied","Data":"69129b08fe5bfc552d777715d2a1eac20f74a31b1c06ebb3940050c592d7eaeb"} Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.397033 4766 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f484f11d-a20d-4d69-9619-d5f8df022bd7-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.397056 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c961d826-8e7c-45cf-afa0-a1712a3def4f-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.397065 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.397074 4766 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9ea6d98-59cc-4526-bf59-7328c0321f59-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.397083 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c961d826-8e7c-45cf-afa0-a1712a3def4f-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.397092 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/51f2b06e-748d-4bb1-b7e7-f5cd039a532d-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:02 crc kubenswrapper[4766]: E0129 11:50:02.397150 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 29 11:50:02 crc kubenswrapper[4766]: E0129 11:50:02.397196 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b77b577e-b980-46fb-945a-a0b57e3bdc17-config-data podName:b77b577e-b980-46fb-945a-a0b57e3bdc17 nodeName:}" failed. No retries permitted until 2026-01-29 11:50:06.397180752 +0000 UTC m=+1743.509573763 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b77b577e-b980-46fb-945a-a0b57e3bdc17-config-data") pod "rabbitmq-server-0" (UID: "b77b577e-b980-46fb-945a-a0b57e3bdc17") : configmap "rabbitmq-config-data" not found Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.626281 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.628060 4766 scope.go:117] "RemoveContainer" containerID="d679005363f3d7068f40510c318a9543fe5b63ce1d7e7cc636a9215f217d7925" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.641535 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.652773 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.698026 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.716557 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-hznsj"] Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.722947 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-hznsj"] Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.758620 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2b54-account-create-update-fxkwh" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.910020 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dd3143d-eabf-4163-a7cb-590dc11a2daf-operator-scripts\") pod \"1dd3143d-eabf-4163-a7cb-590dc11a2daf\" (UID: \"1dd3143d-eabf-4163-a7cb-590dc11a2daf\") " Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.910165 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc7mr\" (UniqueName: \"kubernetes.io/projected/1dd3143d-eabf-4163-a7cb-590dc11a2daf-kube-api-access-sc7mr\") pod \"1dd3143d-eabf-4163-a7cb-590dc11a2daf\" (UID: \"1dd3143d-eabf-4163-a7cb-590dc11a2daf\") " Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.910649 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd3143d-eabf-4163-a7cb-590dc11a2daf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1dd3143d-eabf-4163-a7cb-590dc11a2daf" (UID: "1dd3143d-eabf-4163-a7cb-590dc11a2daf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.910899 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dd3143d-eabf-4163-a7cb-590dc11a2daf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.912429 4766 scope.go:117] "RemoveContainer" containerID="67a037dad4e638172b7099712b789cf884049f6cc0a4510c0636f1f4a13a2e4a" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.917593 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dd3143d-eabf-4163-a7cb-590dc11a2daf-kube-api-access-sc7mr" (OuterVolumeSpecName: "kube-api-access-sc7mr") pod "1dd3143d-eabf-4163-a7cb-590dc11a2daf" (UID: "1dd3143d-eabf-4163-a7cb-590dc11a2daf"). InnerVolumeSpecName "kube-api-access-sc7mr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:02 crc kubenswrapper[4766]: I0129 11:50:02.954298 4766 scope.go:117] "RemoveContainer" containerID="aa69d365dd52beaeca5420f2ec0d4a643b3863f2b22c8b2c4958a5c03855b17f" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.005318 4766 scope.go:117] "RemoveContainer" containerID="4925aa28bf4f33e9f328ae00bd14e1da8d9f6b2c7f29cdfaaafce38d8720b42b" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.027872 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc7mr\" (UniqueName: \"kubernetes.io/projected/1dd3143d-eabf-4163-a7cb-590dc11a2daf-kube-api-access-sc7mr\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:03 crc kubenswrapper[4766]: E0129 11:50:03.027947 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 29 11:50:03 crc kubenswrapper[4766]: E0129 11:50:03.028010 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ace2f6ec-cf57-4742-82e9-e13fd230bb69-config-data podName:ace2f6ec-cf57-4742-82e9-e13fd230bb69 nodeName:}" failed. No retries permitted until 2026-01-29 11:50:07.027989744 +0000 UTC m=+1744.140382755 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/ace2f6ec-cf57-4742-82e9-e13fd230bb69-config-data") pod "rabbitmq-cell1-server-0" (UID: "ace2f6ec-cf57-4742-82e9-e13fd230bb69") : configmap "rabbitmq-cell1-config-data" not found Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.072716 4766 scope.go:117] "RemoveContainer" containerID="579b365946cd3511a4044d9d12ae721717e6d8a5f9a3f4c3c1ce4d75f48b8a40" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.131974 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.209768 4766 scope.go:117] "RemoveContainer" containerID="175dd17b34bc65355780323019254f2f358e93d247a0f42a84345efd3579c3e2" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.228843 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-667bcbf4cf-kw66x" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.233058 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/982e76a1-f77f-4569-bb8e-f524dba573ca-config-data\") pod \"982e76a1-f77f-4569-bb8e-f524dba573ca\" (UID: \"982e76a1-f77f-4569-bb8e-f524dba573ca\") " Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.233113 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55700325-5d09-47fc-adad-06c1a8fbbee4-combined-ca-bundle\") pod \"55700325-5d09-47fc-adad-06c1a8fbbee4\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.233199 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/982e76a1-f77f-4569-bb8e-f524dba573ca-combined-ca-bundle\") pod \"982e76a1-f77f-4569-bb8e-f524dba573ca\" (UID: \"982e76a1-f77f-4569-bb8e-f524dba573ca\") " Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.233254 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7rjc\" (UniqueName: \"kubernetes.io/projected/982e76a1-f77f-4569-bb8e-f524dba573ca-kube-api-access-b7rjc\") pod \"982e76a1-f77f-4569-bb8e-f524dba573ca\" (UID: \"982e76a1-f77f-4569-bb8e-f524dba573ca\") " Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.233290 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55700325-5d09-47fc-adad-06c1a8fbbee4-public-tls-certs\") pod \"55700325-5d09-47fc-adad-06c1a8fbbee4\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.233311 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55700325-5d09-47fc-adad-06c1a8fbbee4-run-httpd\") pod \"55700325-5d09-47fc-adad-06c1a8fbbee4\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.233345 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/55700325-5d09-47fc-adad-06c1a8fbbee4-etc-swift\") pod \"55700325-5d09-47fc-adad-06c1a8fbbee4\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.233429 4766 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55700325-5d09-47fc-adad-06c1a8fbbee4-config-data\") pod \"55700325-5d09-47fc-adad-06c1a8fbbee4\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.233462 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5znsl\" (UniqueName: \"kubernetes.io/projected/55700325-5d09-47fc-adad-06c1a8fbbee4-kube-api-access-5znsl\") pod \"55700325-5d09-47fc-adad-06c1a8fbbee4\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.233484 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55700325-5d09-47fc-adad-06c1a8fbbee4-internal-tls-certs\") pod \"55700325-5d09-47fc-adad-06c1a8fbbee4\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.233535 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/982e76a1-f77f-4569-bb8e-f524dba573ca-nova-novncproxy-tls-certs\") pod \"982e76a1-f77f-4569-bb8e-f524dba573ca\" (UID: \"982e76a1-f77f-4569-bb8e-f524dba573ca\") " Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.233556 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/982e76a1-f77f-4569-bb8e-f524dba573ca-vencrypt-tls-certs\") pod \"982e76a1-f77f-4569-bb8e-f524dba573ca\" (UID: \"982e76a1-f77f-4569-bb8e-f524dba573ca\") " Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.233626 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55700325-5d09-47fc-adad-06c1a8fbbee4-log-httpd\") pod \"55700325-5d09-47fc-adad-06c1a8fbbee4\" (UID: \"55700325-5d09-47fc-adad-06c1a8fbbee4\") " Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.234072 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55700325-5d09-47fc-adad-06c1a8fbbee4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "55700325-5d09-47fc-adad-06c1a8fbbee4" (UID: "55700325-5d09-47fc-adad-06c1a8fbbee4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.234390 4766 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55700325-5d09-47fc-adad-06c1a8fbbee4-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.235187 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55700325-5d09-47fc-adad-06c1a8fbbee4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "55700325-5d09-47fc-adad-06c1a8fbbee4" (UID: "55700325-5d09-47fc-adad-06c1a8fbbee4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.265733 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e" path="/var/lib/kubelet/pods/1ea5bd4c-3f4e-4202-95d0-a9b498cb2a5e/volumes" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.266361 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f68f3b0-f008-4a27-a250-9efe5bdf5fa0" path="/var/lib/kubelet/pods/2f68f3b0-f008-4a27-a250-9efe5bdf5fa0/volumes" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.267729 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51f2b06e-748d-4bb1-b7e7-f5cd039a532d" path="/var/lib/kubelet/pods/51f2b06e-748d-4bb1-b7e7-f5cd039a532d/volumes" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.268978 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73cf0e15-caab-4cea-94b5-7470d635d767" path="/var/lib/kubelet/pods/73cf0e15-caab-4cea-94b5-7470d635d767/volumes" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.269726 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55700325-5d09-47fc-adad-06c1a8fbbee4-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "55700325-5d09-47fc-adad-06c1a8fbbee4" (UID: "55700325-5d09-47fc-adad-06c1a8fbbee4"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.269816 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55700325-5d09-47fc-adad-06c1a8fbbee4-kube-api-access-5znsl" (OuterVolumeSpecName: "kube-api-access-5znsl") pod "55700325-5d09-47fc-adad-06c1a8fbbee4" (UID: "55700325-5d09-47fc-adad-06c1a8fbbee4"). InnerVolumeSpecName "kube-api-access-5znsl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.269866 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c961d826-8e7c-45cf-afa0-a1712a3def4f" path="/var/lib/kubelet/pods/c961d826-8e7c-45cf-afa0-a1712a3def4f/volumes" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.270500 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cae3e2e2-58b5-4a7a-ae77-7712d85990ea" path="/var/lib/kubelet/pods/cae3e2e2-58b5-4a7a-ae77-7712d85990ea/volumes" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.271647 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce78868c-e61f-4d87-9e79-27b29a75644d" path="/var/lib/kubelet/pods/ce78868c-e61f-4d87-9e79-27b29a75644d/volumes" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.272207 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9ea6d98-59cc-4526-bf59-7328c0321f59" path="/var/lib/kubelet/pods/d9ea6d98-59cc-4526-bf59-7328c0321f59/volumes" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.272998 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="defb6fef-3db5-4137-a250-9e20054fe48a" path="/var/lib/kubelet/pods/defb6fef-3db5-4137-a250-9e20054fe48a/volumes" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.274276 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f484f11d-a20d-4d69-9619-d5f8df022bd7" path="/var/lib/kubelet/pods/f484f11d-a20d-4d69-9619-d5f8df022bd7/volumes" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.286641 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/982e76a1-f77f-4569-bb8e-f524dba573ca-kube-api-access-b7rjc" (OuterVolumeSpecName: "kube-api-access-b7rjc") pod "982e76a1-f77f-4569-bb8e-f524dba573ca" (UID: "982e76a1-f77f-4569-bb8e-f524dba573ca"). InnerVolumeSpecName "kube-api-access-b7rjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.340679 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55700325-5d09-47fc-adad-06c1a8fbbee4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "55700325-5d09-47fc-adad-06c1a8fbbee4" (UID: "55700325-5d09-47fc-adad-06c1a8fbbee4"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.348183 4766 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55700325-5d09-47fc-adad-06c1a8fbbee4-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.348783 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7rjc\" (UniqueName: \"kubernetes.io/projected/982e76a1-f77f-4569-bb8e-f524dba573ca-kube-api-access-b7rjc\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.349185 4766 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/55700325-5d09-47fc-adad-06c1a8fbbee4-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.349209 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5znsl\" (UniqueName: \"kubernetes.io/projected/55700325-5d09-47fc-adad-06c1a8fbbee4-kube-api-access-5znsl\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.373775 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/982e76a1-f77f-4569-bb8e-f524dba573ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "982e76a1-f77f-4569-bb8e-f524dba573ca" (UID: "982e76a1-f77f-4569-bb8e-f524dba573ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.374800 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/982e76a1-f77f-4569-bb8e-f524dba573ca-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "982e76a1-f77f-4569-bb8e-f524dba573ca" (UID: "982e76a1-f77f-4569-bb8e-f524dba573ca"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.383556 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/982e76a1-f77f-4569-bb8e-f524dba573ca-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "982e76a1-f77f-4569-bb8e-f524dba573ca" (UID: "982e76a1-f77f-4569-bb8e-f524dba573ca"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.402347 4766 generic.go:334] "Generic (PLEG): container finished" podID="55700325-5d09-47fc-adad-06c1a8fbbee4" containerID="02a9693eb69c4db962a68ada03e1079457c0c7d3b72c123f0dcacc6c9a65052f" exitCode=0 Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.402860 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-667bcbf4cf-kw66x" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.410899 4766 generic.go:334] "Generic (PLEG): container finished" podID="982e76a1-f77f-4569-bb8e-f524dba573ca" containerID="be80252d4149682a15ba44a10bb7c78f7b0e2a78056ea82dcca8b68ed0a66ffa" exitCode=0 Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.411027 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.416330 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55700325-5d09-47fc-adad-06c1a8fbbee4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "55700325-5d09-47fc-adad-06c1a8fbbee4" (UID: "55700325-5d09-47fc-adad-06c1a8fbbee4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.416700 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2b54-account-create-update-fxkwh" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.451747 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/982e76a1-f77f-4569-bb8e-f524dba573ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.452153 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55700325-5d09-47fc-adad-06c1a8fbbee4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.452335 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55700325-5d09-47fc-adad-06c1a8fbbee4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.452356 4766 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/982e76a1-f77f-4569-bb8e-f524dba573ca-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.452371 4766 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/982e76a1-f77f-4569-bb8e-f524dba573ca-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.454178 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55700325-5d09-47fc-adad-06c1a8fbbee4-config-data" (OuterVolumeSpecName: "config-data") pod "55700325-5d09-47fc-adad-06c1a8fbbee4" (UID: "55700325-5d09-47fc-adad-06c1a8fbbee4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.468362 4766 generic.go:334] "Generic (PLEG): container finished" podID="6739e909-eb6b-4578-8436-fa9f24385e0a" containerID="f107edeb0cf35d80499dbe98e5cf3636533bd7951938e89d6e981b29197977de" exitCode=1 Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.469736 4766 scope.go:117] "RemoveContainer" containerID="f107edeb0cf35d80499dbe98e5cf3636533bd7951938e89d6e981b29197977de" Jan 29 11:50:03 crc kubenswrapper[4766]: E0129 11:50:03.470222 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-lxhcz_openstack(6739e909-eb6b-4578-8436-fa9f24385e0a)\"" pod="openstack/root-account-create-update-lxhcz" podUID="6739e909-eb6b-4578-8436-fa9f24385e0a" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.471152 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55700325-5d09-47fc-adad-06c1a8fbbee4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55700325-5d09-47fc-adad-06c1a8fbbee4" (UID: "55700325-5d09-47fc-adad-06c1a8fbbee4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.472975 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-667bcbf4cf-kw66x" event={"ID":"55700325-5d09-47fc-adad-06c1a8fbbee4","Type":"ContainerDied","Data":"02a9693eb69c4db962a68ada03e1079457c0c7d3b72c123f0dcacc6c9a65052f"} Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.472998 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-667bcbf4cf-kw66x" event={"ID":"55700325-5d09-47fc-adad-06c1a8fbbee4","Type":"ContainerDied","Data":"a235dfc083948f94bf2c40bf7cd1dc38db67fc96f1761fb59b0c274560a45e5c"} Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.473009 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"982e76a1-f77f-4569-bb8e-f524dba573ca","Type":"ContainerDied","Data":"be80252d4149682a15ba44a10bb7c78f7b0e2a78056ea82dcca8b68ed0a66ffa"} Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.473021 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"982e76a1-f77f-4569-bb8e-f524dba573ca","Type":"ContainerDied","Data":"ffdaeee44f7feedb9a3d5ecebc0a1ea10d4686290cc6e4d813b51e7e45afe566"} Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.473029 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2b54-account-create-update-fxkwh" event={"ID":"1dd3143d-eabf-4163-a7cb-590dc11a2daf","Type":"ContainerDied","Data":"d20b357216e2955d82e901a1ff9a90d20aacd06c02efd2b3312fd2f894c35fbd"} Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.473042 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lxhcz" event={"ID":"6739e909-eb6b-4578-8436-fa9f24385e0a","Type":"ContainerDied","Data":"f107edeb0cf35d80499dbe98e5cf3636533bd7951938e89d6e981b29197977de"} Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.473066 4766 scope.go:117] "RemoveContainer" containerID="02a9693eb69c4db962a68ada03e1079457c0c7d3b72c123f0dcacc6c9a65052f" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.484008 4766 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/982e76a1-f77f-4569-bb8e-f524dba573ca-config-data" (OuterVolumeSpecName: "config-data") pod "982e76a1-f77f-4569-bb8e-f524dba573ca" (UID: "982e76a1-f77f-4569-bb8e-f524dba573ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.554495 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55700325-5d09-47fc-adad-06c1a8fbbee4-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.554526 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/982e76a1-f77f-4569-bb8e-f524dba573ca-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.554535 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55700325-5d09-47fc-adad-06c1a8fbbee4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.559473 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-2b54-account-create-update-fxkwh"] Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.567057 4766 scope.go:117] "RemoveContainer" containerID="b0a6a18a240b7164e62a8b48d6bc1c1984abcb6427f837cc8857a8516b49b51f" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.574466 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-2b54-account-create-update-fxkwh"] Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.647246 4766 scope.go:117] "RemoveContainer" containerID="02a9693eb69c4db962a68ada03e1079457c0c7d3b72c123f0dcacc6c9a65052f" Jan 29 11:50:03 crc kubenswrapper[4766]: E0129 11:50:03.647903 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02a9693eb69c4db962a68ada03e1079457c0c7d3b72c123f0dcacc6c9a65052f\": container with ID starting with 02a9693eb69c4db962a68ada03e1079457c0c7d3b72c123f0dcacc6c9a65052f not found: ID does not exist" containerID="02a9693eb69c4db962a68ada03e1079457c0c7d3b72c123f0dcacc6c9a65052f" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.647954 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02a9693eb69c4db962a68ada03e1079457c0c7d3b72c123f0dcacc6c9a65052f"} err="failed to get container status \"02a9693eb69c4db962a68ada03e1079457c0c7d3b72c123f0dcacc6c9a65052f\": rpc error: code = NotFound desc = could not find container \"02a9693eb69c4db962a68ada03e1079457c0c7d3b72c123f0dcacc6c9a65052f\": container with ID starting with 02a9693eb69c4db962a68ada03e1079457c0c7d3b72c123f0dcacc6c9a65052f not found: ID does not exist" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.647981 4766 scope.go:117] "RemoveContainer" containerID="b0a6a18a240b7164e62a8b48d6bc1c1984abcb6427f837cc8857a8516b49b51f" Jan 29 11:50:03 crc kubenswrapper[4766]: E0129 11:50:03.649993 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0a6a18a240b7164e62a8b48d6bc1c1984abcb6427f837cc8857a8516b49b51f\": container with ID starting with b0a6a18a240b7164e62a8b48d6bc1c1984abcb6427f837cc8857a8516b49b51f not found: ID does not exist" 
containerID="b0a6a18a240b7164e62a8b48d6bc1c1984abcb6427f837cc8857a8516b49b51f" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.650016 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0a6a18a240b7164e62a8b48d6bc1c1984abcb6427f837cc8857a8516b49b51f"} err="failed to get container status \"b0a6a18a240b7164e62a8b48d6bc1c1984abcb6427f837cc8857a8516b49b51f\": rpc error: code = NotFound desc = could not find container \"b0a6a18a240b7164e62a8b48d6bc1c1984abcb6427f837cc8857a8516b49b51f\": container with ID starting with b0a6a18a240b7164e62a8b48d6bc1c1984abcb6427f837cc8857a8516b49b51f not found: ID does not exist" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.650037 4766 scope.go:117] "RemoveContainer" containerID="be80252d4149682a15ba44a10bb7c78f7b0e2a78056ea82dcca8b68ed0a66ffa" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.675260 4766 scope.go:117] "RemoveContainer" containerID="be80252d4149682a15ba44a10bb7c78f7b0e2a78056ea82dcca8b68ed0a66ffa" Jan 29 11:50:03 crc kubenswrapper[4766]: E0129 11:50:03.676577 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be80252d4149682a15ba44a10bb7c78f7b0e2a78056ea82dcca8b68ed0a66ffa\": container with ID starting with be80252d4149682a15ba44a10bb7c78f7b0e2a78056ea82dcca8b68ed0a66ffa not found: ID does not exist" containerID="be80252d4149682a15ba44a10bb7c78f7b0e2a78056ea82dcca8b68ed0a66ffa" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.676617 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be80252d4149682a15ba44a10bb7c78f7b0e2a78056ea82dcca8b68ed0a66ffa"} err="failed to get container status \"be80252d4149682a15ba44a10bb7c78f7b0e2a78056ea82dcca8b68ed0a66ffa\": rpc error: code = NotFound desc = could not find container \"be80252d4149682a15ba44a10bb7c78f7b0e2a78056ea82dcca8b68ed0a66ffa\": container with ID starting with be80252d4149682a15ba44a10bb7c78f7b0e2a78056ea82dcca8b68ed0a66ffa not found: ID does not exist" Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.757450 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.770520 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6bb21068-54ea-4e08-b03e-5186a35d7a09" containerName="ceilometer-central-agent" containerID="cri-o://7d416bce038e327e0ac5e80d025af4b30f549a9927b2ce75a0d83f38a53e6163" gracePeriod=30 Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.770925 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6bb21068-54ea-4e08-b03e-5186a35d7a09" containerName="proxy-httpd" containerID="cri-o://2f0fdc25b25c46bbd38ca0f02d558f7c1d71098932ed72e4a7f35d5b8f371421" gracePeriod=30 Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.770933 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6bb21068-54ea-4e08-b03e-5186a35d7a09" containerName="ceilometer-notification-agent" containerID="cri-o://52b9aceb7fcf91e3fea6020d24cd2f5e816f8e95a93472d9f4a950055b986415" gracePeriod=30 Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.771009 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6bb21068-54ea-4e08-b03e-5186a35d7a09" containerName="sg-core" 
containerID="cri-o://10926e24d0436cca11a58b6675241744a47408f25fc95907f65bdb78e9c1e372" gracePeriod=30 Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.821542 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:50:03 crc kubenswrapper[4766]: I0129 11:50:03.824116 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="3ed02fac-f569-47e7-a243-6d0e37dc6c05" containerName="kube-state-metrics" containerID="cri-o://98b39f027e94d9b7e2c9e0f75cbec74515a6877539cbea210a05a9de92134411" gracePeriod=30 Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:03.926074 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-d3fe-account-create-update-zjmd9"] Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:03.938200 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:03.938788 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="d00d673d-aea5-4014-8e2b-bcb78afb7606" containerName="memcached" containerID="cri-o://f90c3671694662e6b9f1584abc9bd6ae5dd46f25e77b8df0cd377c69033dc174" gracePeriod=30 Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:03.956566 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-d3fe-account-create-update-zjmd9"] Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.013556 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d3fe-account-create-update-bpt5v"] Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.014069 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73cf0e15-caab-4cea-94b5-7470d635d767" containerName="ovn-controller" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.014088 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="73cf0e15-caab-4cea-94b5-7470d635d767" containerName="ovn-controller" Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.014103 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55700325-5d09-47fc-adad-06c1a8fbbee4" containerName="proxy-server" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.014111 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="55700325-5d09-47fc-adad-06c1a8fbbee4" containerName="proxy-server" Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.014122 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="defb6fef-3db5-4137-a250-9e20054fe48a" containerName="openstack-network-exporter" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.014130 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="defb6fef-3db5-4137-a250-9e20054fe48a" containerName="openstack-network-exporter" Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.014149 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f2b06e-748d-4bb1-b7e7-f5cd039a532d" containerName="openstack-network-exporter" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.014158 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f2b06e-748d-4bb1-b7e7-f5cd039a532d" containerName="openstack-network-exporter" Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.014166 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9ea6d98-59cc-4526-bf59-7328c0321f59" containerName="init" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.014173 4766 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="d9ea6d98-59cc-4526-bf59-7328c0321f59" containerName="init" Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.014188 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c961d826-8e7c-45cf-afa0-a1712a3def4f" containerName="openstack-network-exporter" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.014195 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c961d826-8e7c-45cf-afa0-a1712a3def4f" containerName="openstack-network-exporter" Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.014213 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="982e76a1-f77f-4569-bb8e-f524dba573ca" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.014221 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="982e76a1-f77f-4569-bb8e-f524dba573ca" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.014243 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55700325-5d09-47fc-adad-06c1a8fbbee4" containerName="proxy-httpd" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.014251 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="55700325-5d09-47fc-adad-06c1a8fbbee4" containerName="proxy-httpd" Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.014261 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9ea6d98-59cc-4526-bf59-7328c0321f59" containerName="dnsmasq-dns" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.014268 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9ea6d98-59cc-4526-bf59-7328c0321f59" containerName="dnsmasq-dns" Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.014283 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c961d826-8e7c-45cf-afa0-a1712a3def4f" containerName="ovsdbserver-sb" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.014291 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c961d826-8e7c-45cf-afa0-a1712a3def4f" containerName="ovsdbserver-sb" Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.014307 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f2b06e-748d-4bb1-b7e7-f5cd039a532d" containerName="ovsdbserver-nb" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.014314 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f2b06e-748d-4bb1-b7e7-f5cd039a532d" containerName="ovsdbserver-nb" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.014543 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="55700325-5d09-47fc-adad-06c1a8fbbee4" containerName="proxy-httpd" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.014562 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c961d826-8e7c-45cf-afa0-a1712a3def4f" containerName="ovsdbserver-sb" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.014572 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="55700325-5d09-47fc-adad-06c1a8fbbee4" containerName="proxy-server" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.014587 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="982e76a1-f77f-4569-bb8e-f524dba573ca" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.014596 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="51f2b06e-748d-4bb1-b7e7-f5cd039a532d" containerName="ovsdbserver-nb" Jan 29 11:50:04 crc 
kubenswrapper[4766]: I0129 11:50:04.014609 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9ea6d98-59cc-4526-bf59-7328c0321f59" containerName="dnsmasq-dns" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.014622 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="51f2b06e-748d-4bb1-b7e7-f5cd039a532d" containerName="openstack-network-exporter" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.014635 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="73cf0e15-caab-4cea-94b5-7470d635d767" containerName="ovn-controller" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.014653 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c961d826-8e7c-45cf-afa0-a1712a3def4f" containerName="openstack-network-exporter" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.014668 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="defb6fef-3db5-4137-a250-9e20054fe48a" containerName="openstack-network-exporter" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.043900 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d3fe-account-create-update-bpt5v" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.046669 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.077155 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d3fe-account-create-update-bpt5v"] Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.115749 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-pvvng"] Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.123489 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-pvvng"] Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.131436 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-czwdh"] Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.144469 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6757d49457-dctc6"] Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.144691 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-6757d49457-dctc6" podUID="0607cc62-49d5-4a25-b4ad-636cae5d1e7e" containerName="keystone-api" containerID="cri-o://9a99c0592d77644bf5b6f77afc5cf7aaa5c3a2e758cf41c91b1d8d6f29b64745" gracePeriod=30 Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.161598 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-czwdh"] Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.169839 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.181536 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tdtz\" (UniqueName: \"kubernetes.io/projected/b198eac9-030c-43fc-ae7d-a59e6bf299a4-kube-api-access-4tdtz\") pod \"keystone-d3fe-account-create-update-bpt5v\" (UID: \"b198eac9-030c-43fc-ae7d-a59e6bf299a4\") " pod="openstack/keystone-d3fe-account-create-update-bpt5v" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.181642 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b198eac9-030c-43fc-ae7d-a59e6bf299a4-operator-scripts\") pod \"keystone-d3fe-account-create-update-bpt5v\" (UID: \"b198eac9-030c-43fc-ae7d-a59e6bf299a4\") " pod="openstack/keystone-d3fe-account-create-update-bpt5v" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.199491 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="1996793f-f3ca-4559-97d6-867f0d0a2b61" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.170:9292/healthcheck\": read tcp 10.217.0.2:44542->10.217.0.170:9292: read: connection reset by peer" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.199727 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="1996793f-f3ca-4559-97d6-867f0d0a2b61" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.170:9292/healthcheck\": read tcp 10.217.0.2:44546->10.217.0.170:9292: read: connection reset by peer" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.208319 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="dd1ffb49-b314-4d31-94d6-de70e35d917e" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.171:9292/healthcheck\": read tcp 10.217.0.2:41020->10.217.0.171:9292: read: connection reset by peer" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.208507 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="dd1ffb49-b314-4d31-94d6-de70e35d917e" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.171:9292/healthcheck\": read tcp 10.217.0.2:41018->10.217.0.171:9292: read: connection reset by peer" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.209594 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-d3fe-account-create-update-bpt5v"] Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.227699 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-864fcd46f6-bn7r2" podUID="a084e5b1-d167-4678-8ab9-af72fb1d07fd" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.151:9311/healthcheck\": read tcp 10.217.0.2:40392->10.217.0.151:9311: read: connection reset by peer" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.228026 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-864fcd46f6-bn7r2" podUID="a084e5b1-d167-4678-8ab9-af72fb1d07fd" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.151:9311/healthcheck\": read tcp 10.217.0.2:40398->10.217.0.151:9311: read: connection reset by peer" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.239130 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-hlq6m"] Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.271562 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-hlq6m"] Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.283154 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tdtz\" (UniqueName: \"kubernetes.io/projected/b198eac9-030c-43fc-ae7d-a59e6bf299a4-kube-api-access-4tdtz\") pod \"keystone-d3fe-account-create-update-bpt5v\" (UID: \"b198eac9-030c-43fc-ae7d-a59e6bf299a4\") " pod="openstack/keystone-d3fe-account-create-update-bpt5v" Jan 29 
11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.283299 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b198eac9-030c-43fc-ae7d-a59e6bf299a4-operator-scripts\") pod \"keystone-d3fe-account-create-update-bpt5v\" (UID: \"b198eac9-030c-43fc-ae7d-a59e6bf299a4\") " pod="openstack/keystone-d3fe-account-create-update-bpt5v" Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.283944 4766 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.284018 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b198eac9-030c-43fc-ae7d-a59e6bf299a4-operator-scripts podName:b198eac9-030c-43fc-ae7d-a59e6bf299a4 nodeName:}" failed. No retries permitted until 2026-01-29 11:50:04.783998571 +0000 UTC m=+1741.896391592 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b198eac9-030c-43fc-ae7d-a59e6bf299a4-operator-scripts") pod "keystone-d3fe-account-create-update-bpt5v" (UID: "b198eac9-030c-43fc-ae7d-a59e6bf299a4") : configmap "openstack-scripts" not found Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.286146 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-lxhcz"] Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.290069 4766 projected.go:194] Error preparing data for projected volume kube-api-access-4tdtz for pod openstack/keystone-d3fe-account-create-update-bpt5v: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.290554 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b198eac9-030c-43fc-ae7d-a59e6bf299a4-kube-api-access-4tdtz podName:b198eac9-030c-43fc-ae7d-a59e6bf299a4 nodeName:}" failed. No retries permitted until 2026-01-29 11:50:04.790180954 +0000 UTC m=+1741.902573995 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4tdtz" (UniqueName: "kubernetes.io/projected/b198eac9-030c-43fc-ae7d-a59e6bf299a4-kube-api-access-4tdtz") pod "keystone-d3fe-account-create-update-bpt5v" (UID: "b198eac9-030c-43fc-ae7d-a59e6bf299a4") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.343366 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="34a8c513-ef7f-49ce-a0d8-2d9351abca2a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:58898->10.217.0.197:8775: read: connection reset by peer" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.343780 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="34a8c513-ef7f-49ce-a0d8-2d9351abca2a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:58884->10.217.0.197:8775: read: connection reset by peer" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.499246 4766 generic.go:334] "Generic (PLEG): container finished" podID="8162079c-abe4-4e9c-bdd5-2fbb43187e61" containerID="6b47eab1e9e54a967ffb6a8dbb5d22f27c753e7cad3329b3e2436f5c3898c7c9" exitCode=0 Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.499575 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ff4655576-rzc26" event={"ID":"8162079c-abe4-4e9c-bdd5-2fbb43187e61","Type":"ContainerDied","Data":"6b47eab1e9e54a967ffb6a8dbb5d22f27c753e7cad3329b3e2436f5c3898c7c9"} Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.501968 4766 generic.go:334] "Generic (PLEG): container finished" podID="c0c26286-7e5f-4610-967b-408ad3916918" containerID="952afb9816e99acbe37c8a9ddc03d82aee8becf7ea80015a22c126ca32f58ff9" exitCode=0 Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.502022 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c0c26286-7e5f-4610-967b-408ad3916918","Type":"ContainerDied","Data":"952afb9816e99acbe37c8a9ddc03d82aee8becf7ea80015a22c126ca32f58ff9"} Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.504613 4766 generic.go:334] "Generic (PLEG): container finished" podID="34a8c513-ef7f-49ce-a0d8-2d9351abca2a" containerID="6540ff6aadfe105654848b099a8bef21fce6c3bc83bf18acea31d173e8986a0b" exitCode=0 Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.504659 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34a8c513-ef7f-49ce-a0d8-2d9351abca2a","Type":"ContainerDied","Data":"6540ff6aadfe105654848b099a8bef21fce6c3bc83bf18acea31d173e8986a0b"} Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.505706 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4e41-account-create-update-rkj42" event={"ID":"fa516723-105a-4ea0-98d7-317538e3d438","Type":"ContainerDied","Data":"9c7fa56cadd5d905af55463952ceb6834351acb719a90655e7a4fb7e1bbc2476"} Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.505728 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c7fa56cadd5d905af55463952ceb6834351acb719a90655e7a4fb7e1bbc2476" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.506964 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fb14-account-create-update-5hrp9" 
event={"ID":"ee945927-3683-4163-ac37-83d894a9569b","Type":"ContainerDied","Data":"2cd775c9a342e45a9eaab28b98ce5cc214e4598074cb2861230e916c669915c4"} Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.506985 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cd775c9a342e45a9eaab28b98ce5cc214e4598074cb2861230e916c669915c4" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.512561 4766 generic.go:334] "Generic (PLEG): container finished" podID="dd1ffb49-b314-4d31-94d6-de70e35d917e" containerID="a5870626b08c5ff65aad3d62a1002578aa41b4503406b749e77a94df8bdaa959" exitCode=0 Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.512637 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dd1ffb49-b314-4d31-94d6-de70e35d917e","Type":"ContainerDied","Data":"a5870626b08c5ff65aad3d62a1002578aa41b4503406b749e77a94df8bdaa959"} Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.536152 4766 generic.go:334] "Generic (PLEG): container finished" podID="6bb21068-54ea-4e08-b03e-5186a35d7a09" containerID="2f0fdc25b25c46bbd38ca0f02d558f7c1d71098932ed72e4a7f35d5b8f371421" exitCode=0 Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.536184 4766 generic.go:334] "Generic (PLEG): container finished" podID="6bb21068-54ea-4e08-b03e-5186a35d7a09" containerID="10926e24d0436cca11a58b6675241744a47408f25fc95907f65bdb78e9c1e372" exitCode=2 Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.536191 4766 generic.go:334] "Generic (PLEG): container finished" podID="6bb21068-54ea-4e08-b03e-5186a35d7a09" containerID="7d416bce038e327e0ac5e80d025af4b30f549a9927b2ce75a0d83f38a53e6163" exitCode=0 Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.536305 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bb21068-54ea-4e08-b03e-5186a35d7a09","Type":"ContainerDied","Data":"2f0fdc25b25c46bbd38ca0f02d558f7c1d71098932ed72e4a7f35d5b8f371421"} Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.536332 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bb21068-54ea-4e08-b03e-5186a35d7a09","Type":"ContainerDied","Data":"10926e24d0436cca11a58b6675241744a47408f25fc95907f65bdb78e9c1e372"} Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.536341 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bb21068-54ea-4e08-b03e-5186a35d7a09","Type":"ContainerDied","Data":"7d416bce038e327e0ac5e80d025af4b30f549a9927b2ce75a0d83f38a53e6163"} Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.553151 4766 generic.go:334] "Generic (PLEG): container finished" podID="3ed02fac-f569-47e7-a243-6d0e37dc6c05" containerID="98b39f027e94d9b7e2c9e0f75cbec74515a6877539cbea210a05a9de92134411" exitCode=2 Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.553221 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3ed02fac-f569-47e7-a243-6d0e37dc6c05","Type":"ContainerDied","Data":"98b39f027e94d9b7e2c9e0f75cbec74515a6877539cbea210a05a9de92134411"} Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.555970 4766 generic.go:334] "Generic (PLEG): container finished" podID="1996793f-f3ca-4559-97d6-867f0d0a2b61" containerID="7757bdf84a1a20ce16552c3e15762e105f6f1602c859ce9e79be4ff4bbd3a36d" exitCode=0 Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.556020 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"1996793f-f3ca-4559-97d6-867f0d0a2b61","Type":"ContainerDied","Data":"7757bdf84a1a20ce16552c3e15762e105f6f1602c859ce9e79be4ff4bbd3a36d"} Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.557832 4766 generic.go:334] "Generic (PLEG): container finished" podID="a084e5b1-d167-4678-8ab9-af72fb1d07fd" containerID="9982d06a3f9e319a6ac98d0397be8271cb4490d37b4f3f2be7d30bd0f946c97e" exitCode=0 Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.557882 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-864fcd46f6-bn7r2" event={"ID":"a084e5b1-d167-4678-8ab9-af72fb1d07fd","Type":"ContainerDied","Data":"9982d06a3f9e319a6ac98d0397be8271cb4490d37b4f3f2be7d30bd0f946c97e"} Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.559618 4766 generic.go:334] "Generic (PLEG): container finished" podID="22118bb6-3dd9-41d5-8215-d8e4679828ba" containerID="b58d99a157c04c8d7ed139d5f25b0d83bc497ae3ae246723524e597810fc69f5" exitCode=0 Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.560105 4766 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-lxhcz" secret="" err="secret \"galera-openstack-dockercfg-5cjjw\" not found" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.560134 4766 scope.go:117] "RemoveContainer" containerID="f107edeb0cf35d80499dbe98e5cf3636533bd7951938e89d6e981b29197977de" Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.560305 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-lxhcz_openstack(6739e909-eb6b-4578-8436-fa9f24385e0a)\"" pod="openstack/root-account-create-update-lxhcz" podUID="6739e909-eb6b-4578-8436-fa9f24385e0a" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.560529 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"22118bb6-3dd9-41d5-8215-d8e4679828ba","Type":"ContainerDied","Data":"b58d99a157c04c8d7ed139d5f25b0d83bc497ae3ae246723524e597810fc69f5"} Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.622054 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="ea239fdb-85e2-48e6-b992-42bd9f7e66c8" containerName="galera" containerID="cri-o://924c2d970cbd759f4242c3fce696d5d00f5727e764f49338b35fa22e2a1a46c7" gracePeriod=30 Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.661565 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c1419dc2d71a7c2e968757b6bca4a6964a8725001bdd981f617c2e2f489b856 is running failed: container process not found" containerID="5c1419dc2d71a7c2e968757b6bca4a6964a8725001bdd981f617c2e2f489b856" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.662165 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c1419dc2d71a7c2e968757b6bca4a6964a8725001bdd981f617c2e2f489b856 is running failed: container process not found" containerID="5c1419dc2d71a7c2e968757b6bca4a6964a8725001bdd981f617c2e2f489b856" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 
11:50:04.665564 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c1419dc2d71a7c2e968757b6bca4a6964a8725001bdd981f617c2e2f489b856 is running failed: container process not found" containerID="5c1419dc2d71a7c2e968757b6bca4a6964a8725001bdd981f617c2e2f489b856" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.665612 4766 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c1419dc2d71a7c2e968757b6bca4a6964a8725001bdd981f617c2e2f489b856 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="7d99f4d4-0dab-45de-ac76-7a0fa820c353" containerName="nova-scheduler-scheduler" Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.700434 4766 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.700484 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6739e909-eb6b-4578-8436-fa9f24385e0a-operator-scripts podName:6739e909-eb6b-4578-8436-fa9f24385e0a nodeName:}" failed. No retries permitted until 2026-01-29 11:50:05.200470958 +0000 UTC m=+1742.312863969 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/6739e909-eb6b-4578-8436-fa9f24385e0a-operator-scripts") pod "root-account-create-update-lxhcz" (UID: "6739e909-eb6b-4578-8436-fa9f24385e0a") : configmap "openstack-scripts" not found Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.703106 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="004bd341daa79bad3d15e54cc1bb127c54401c3d66802d245e39b218f040695f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.705422 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="004bd341daa79bad3d15e54cc1bb127c54401c3d66802d245e39b218f040695f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.707845 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="004bd341daa79bad3d15e54cc1bb127c54401c3d66802d245e39b218f040695f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.707900 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="6c6eae2b-18a8-4a82-95e2-4940490b1678" containerName="nova-cell1-conductor-conductor" Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.768089 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="4d4badd8b305888bf4d052a10979015964644aa496c75333eb425b61d68f5844" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.769429 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4d4badd8b305888bf4d052a10979015964644aa496c75333eb425b61d68f5844" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.771581 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4d4badd8b305888bf4d052a10979015964644aa496c75333eb425b61d68f5844" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.771611 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="4f673618-4b7d-47e5-84af-092c995bca8e" containerName="galera" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.802114 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tdtz\" (UniqueName: \"kubernetes.io/projected/b198eac9-030c-43fc-ae7d-a59e6bf299a4-kube-api-access-4tdtz\") pod \"keystone-d3fe-account-create-update-bpt5v\" (UID: \"b198eac9-030c-43fc-ae7d-a59e6bf299a4\") " pod="openstack/keystone-d3fe-account-create-update-bpt5v" Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.802231 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b198eac9-030c-43fc-ae7d-a59e6bf299a4-operator-scripts\") pod \"keystone-d3fe-account-create-update-bpt5v\" (UID: \"b198eac9-030c-43fc-ae7d-a59e6bf299a4\") " pod="openstack/keystone-d3fe-account-create-update-bpt5v" Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.802389 4766 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.802448 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b198eac9-030c-43fc-ae7d-a59e6bf299a4-operator-scripts podName:b198eac9-030c-43fc-ae7d-a59e6bf299a4 nodeName:}" failed. No retries permitted until 2026-01-29 11:50:05.802433703 +0000 UTC m=+1742.914826714 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b198eac9-030c-43fc-ae7d-a59e6bf299a4-operator-scripts") pod "keystone-d3fe-account-create-update-bpt5v" (UID: "b198eac9-030c-43fc-ae7d-a59e6bf299a4") : configmap "openstack-scripts" not found Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.805814 4766 projected.go:194] Error preparing data for projected volume kube-api-access-4tdtz for pod openstack/keystone-d3fe-account-create-update-bpt5v: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 11:50:04 crc kubenswrapper[4766]: E0129 11:50:04.805894 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b198eac9-030c-43fc-ae7d-a59e6bf299a4-kube-api-access-4tdtz podName:b198eac9-030c-43fc-ae7d-a59e6bf299a4 nodeName:}" failed. No retries permitted until 2026-01-29 11:50:05.805872244 +0000 UTC m=+1742.918265325 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-4tdtz" (UniqueName: "kubernetes.io/projected/b198eac9-030c-43fc-ae7d-a59e6bf299a4-kube-api-access-4tdtz") pod "keystone-d3fe-account-create-update-bpt5v" (UID: "b198eac9-030c-43fc-ae7d-a59e6bf299a4") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 11:50:04 crc kubenswrapper[4766]: I0129 11:50:04.972083 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/memcached-0" podUID="d00d673d-aea5-4014-8e2b-bcb78afb7606" containerName="memcached" probeResult="failure" output="dial tcp 10.217.0.99:11211: connect: connection refused" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.038563 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-fb14-account-create-update-5hrp9" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.084095 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-667bcbf4cf-kw66x"] Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.098316 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-667bcbf4cf-kw66x"] Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.098383 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.106134 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee945927-3683-4163-ac37-83d894a9569b-operator-scripts\") pod \"ee945927-3683-4163-ac37-83d894a9569b\" (UID: \"ee945927-3683-4163-ac37-83d894a9569b\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.106191 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsd2g\" (UniqueName: \"kubernetes.io/projected/ee945927-3683-4163-ac37-83d894a9569b-kube-api-access-tsd2g\") pod \"ee945927-3683-4163-ac37-83d894a9569b\" (UID: \"ee945927-3683-4163-ac37-83d894a9569b\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.107562 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee945927-3683-4163-ac37-83d894a9569b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ee945927-3683-4163-ac37-83d894a9569b" (UID: "ee945927-3683-4163-ac37-83d894a9569b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.108147 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.119761 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee945927-3683-4163-ac37-83d894a9569b-kube-api-access-tsd2g" (OuterVolumeSpecName: "kube-api-access-tsd2g") pod "ee945927-3683-4163-ac37-83d894a9569b" (UID: "ee945927-3683-4163-ac37-83d894a9569b"). InnerVolumeSpecName "kube-api-access-tsd2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.126669 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="6bb21068-54ea-4e08-b03e-5186a35d7a09" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.195:3000/\": dial tcp 10.217.0.195:3000: connect: connection refused" Jan 29 11:50:05 crc kubenswrapper[4766]: E0129 11:50:05.176679 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-4tdtz operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-d3fe-account-create-update-bpt5v" podUID="b198eac9-030c-43fc-ae7d-a59e6bf299a4" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.181517 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4e41-account-create-update-rkj42" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.209214 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee945927-3683-4163-ac37-83d894a9569b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.209238 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsd2g\" (UniqueName: \"kubernetes.io/projected/ee945927-3683-4163-ac37-83d894a9569b-kube-api-access-tsd2g\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:05 crc kubenswrapper[4766]: E0129 11:50:05.209311 4766 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 29 11:50:05 crc kubenswrapper[4766]: E0129 11:50:05.209365 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6739e909-eb6b-4578-8436-fa9f24385e0a-operator-scripts podName:6739e909-eb6b-4578-8436-fa9f24385e0a nodeName:}" failed. No retries permitted until 2026-01-29 11:50:06.209339319 +0000 UTC m=+1743.321732330 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/6739e909-eb6b-4578-8436-fa9f24385e0a-operator-scripts") pod "root-account-create-update-lxhcz" (UID: "6739e909-eb6b-4578-8436-fa9f24385e0a") : configmap "openstack-scripts" not found Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.226346 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.242953 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7ff4655576-rzc26" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.243513 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.244225 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dd3143d-eabf-4163-a7cb-590dc11a2daf" path="/var/lib/kubelet/pods/1dd3143d-eabf-4163-a7cb-590dc11a2daf/volumes" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.244595 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fbb8794-f929-4bc3-9fc4-fc1e8589691b" path="/var/lib/kubelet/pods/3fbb8794-f929-4bc3-9fc4-fc1e8589691b/volumes" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.245117 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55700325-5d09-47fc-adad-06c1a8fbbee4" path="/var/lib/kubelet/pods/55700325-5d09-47fc-adad-06c1a8fbbee4/volumes" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.245652 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cf90405-9df9-4821-831d-f6bb66f3268e" path="/var/lib/kubelet/pods/7cf90405-9df9-4821-831d-f6bb66f3268e/volumes" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.246551 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81700a7f-32e9-45dd-b223-058f4340deb4" path="/var/lib/kubelet/pods/81700a7f-32e9-45dd-b223-058f4340deb4/volumes" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.247021 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e8be85b-b686-4a64-ab6a-42122b1a995c" path="/var/lib/kubelet/pods/8e8be85b-b686-4a64-ab6a-42122b1a995c/volumes" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.249040 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="982e76a1-f77f-4569-bb8e-f524dba573ca" path="/var/lib/kubelet/pods/982e76a1-f77f-4569-bb8e-f524dba573ca/volumes" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.276910 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.311428 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-combined-ca-bundle\") pod \"c0c26286-7e5f-4610-967b-408ad3916918\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.311505 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-internal-tls-certs\") pod \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\" (UID: \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.311553 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa516723-105a-4ea0-98d7-317538e3d438-operator-scripts\") pod \"fa516723-105a-4ea0-98d7-317538e3d438\" (UID: \"fa516723-105a-4ea0-98d7-317538e3d438\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.311572 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-public-tls-certs\") pod \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\" (UID: \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.311599 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8zpw\" (UniqueName: \"kubernetes.io/projected/c0c26286-7e5f-4610-967b-408ad3916918-kube-api-access-d8zpw\") pod \"c0c26286-7e5f-4610-967b-408ad3916918\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.311624 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-combined-ca-bundle\") pod \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\" (UID: \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.311646 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed02fac-f569-47e7-a243-6d0e37dc6c05-combined-ca-bundle\") pod \"3ed02fac-f569-47e7-a243-6d0e37dc6c05\" (UID: \"3ed02fac-f569-47e7-a243-6d0e37dc6c05\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.311713 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-config-data\") pod \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\" (UID: \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.311737 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59l6r\" (UniqueName: \"kubernetes.io/projected/3ed02fac-f569-47e7-a243-6d0e37dc6c05-kube-api-access-59l6r\") pod \"3ed02fac-f569-47e7-a243-6d0e37dc6c05\" (UID: \"3ed02fac-f569-47e7-a243-6d0e37dc6c05\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.311763 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-public-tls-certs\") pod 
\"c0c26286-7e5f-4610-967b-408ad3916918\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.311780 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0c26286-7e5f-4610-967b-408ad3916918-logs\") pod \"c0c26286-7e5f-4610-967b-408ad3916918\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.311805 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0c26286-7e5f-4610-967b-408ad3916918-etc-machine-id\") pod \"c0c26286-7e5f-4610-967b-408ad3916918\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.311824 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcb9s\" (UniqueName: \"kubernetes.io/projected/8162079c-abe4-4e9c-bdd5-2fbb43187e61-kube-api-access-vcb9s\") pod \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\" (UID: \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.311840 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3ed02fac-f569-47e7-a243-6d0e37dc6c05-kube-state-metrics-tls-config\") pod \"3ed02fac-f569-47e7-a243-6d0e37dc6c05\" (UID: \"3ed02fac-f569-47e7-a243-6d0e37dc6c05\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.311867 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-scripts\") pod \"c0c26286-7e5f-4610-967b-408ad3916918\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.311885 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-config-data\") pod \"c0c26286-7e5f-4610-967b-408ad3916918\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.311908 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-scripts\") pod \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\" (UID: \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.311926 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8162079c-abe4-4e9c-bdd5-2fbb43187e61-logs\") pod \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\" (UID: \"8162079c-abe4-4e9c-bdd5-2fbb43187e61\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.311947 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-internal-tls-certs\") pod \"c0c26286-7e5f-4610-967b-408ad3916918\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.311980 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ed02fac-f569-47e7-a243-6d0e37dc6c05-kube-state-metrics-tls-certs\") pod 
\"3ed02fac-f569-47e7-a243-6d0e37dc6c05\" (UID: \"3ed02fac-f569-47e7-a243-6d0e37dc6c05\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.311998 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-config-data-custom\") pod \"c0c26286-7e5f-4610-967b-408ad3916918\" (UID: \"c0c26286-7e5f-4610-967b-408ad3916918\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.312026 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2kdc\" (UniqueName: \"kubernetes.io/projected/fa516723-105a-4ea0-98d7-317538e3d438-kube-api-access-l2kdc\") pod \"fa516723-105a-4ea0-98d7-317538e3d438\" (UID: \"fa516723-105a-4ea0-98d7-317538e3d438\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.339717 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8162079c-abe4-4e9c-bdd5-2fbb43187e61-logs" (OuterVolumeSpecName: "logs") pod "8162079c-abe4-4e9c-bdd5-2fbb43187e61" (UID: "8162079c-abe4-4e9c-bdd5-2fbb43187e61"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.391025 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-scripts" (OuterVolumeSpecName: "scripts") pod "c0c26286-7e5f-4610-967b-408ad3916918" (UID: "c0c26286-7e5f-4610-967b-408ad3916918"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.392906 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa516723-105a-4ea0-98d7-317538e3d438-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fa516723-105a-4ea0-98d7-317538e3d438" (UID: "fa516723-105a-4ea0-98d7-317538e3d438"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.393715 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa516723-105a-4ea0-98d7-317538e3d438-kube-api-access-l2kdc" (OuterVolumeSpecName: "kube-api-access-l2kdc") pod "fa516723-105a-4ea0-98d7-317538e3d438" (UID: "fa516723-105a-4ea0-98d7-317538e3d438"). InnerVolumeSpecName "kube-api-access-l2kdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.397715 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0c26286-7e5f-4610-967b-408ad3916918-logs" (OuterVolumeSpecName: "logs") pod "c0c26286-7e5f-4610-967b-408ad3916918" (UID: "c0c26286-7e5f-4610-967b-408ad3916918"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.398740 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ed02fac-f569-47e7-a243-6d0e37dc6c05-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "3ed02fac-f569-47e7-a243-6d0e37dc6c05" (UID: "3ed02fac-f569-47e7-a243-6d0e37dc6c05"). InnerVolumeSpecName "kube-state-metrics-tls-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.401214 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0c26286-7e5f-4610-967b-408ad3916918-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c0c26286-7e5f-4610-967b-408ad3916918" (UID: "c0c26286-7e5f-4610-967b-408ad3916918"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.405973 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c0c26286-7e5f-4610-967b-408ad3916918" (UID: "c0c26286-7e5f-4610-967b-408ad3916918"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.428502 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-scripts" (OuterVolumeSpecName: "scripts") pod "8162079c-abe4-4e9c-bdd5-2fbb43187e61" (UID: "8162079c-abe4-4e9c-bdd5-2fbb43187e61"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.430867 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0c26286-7e5f-4610-967b-408ad3916918-kube-api-access-d8zpw" (OuterVolumeSpecName: "kube-api-access-d8zpw") pod "c0c26286-7e5f-4610-967b-408ad3916918" (UID: "c0c26286-7e5f-4610-967b-408ad3916918"). InnerVolumeSpecName "kube-api-access-d8zpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.432403 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22118bb6-3dd9-41d5-8215-d8e4679828ba-config-data\") pod \"22118bb6-3dd9-41d5-8215-d8e4679828ba\" (UID: \"22118bb6-3dd9-41d5-8215-d8e4679828ba\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.432519 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22118bb6-3dd9-41d5-8215-d8e4679828ba-logs\") pod \"22118bb6-3dd9-41d5-8215-d8e4679828ba\" (UID: \"22118bb6-3dd9-41d5-8215-d8e4679828ba\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.432577 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22118bb6-3dd9-41d5-8215-d8e4679828ba-combined-ca-bundle\") pod \"22118bb6-3dd9-41d5-8215-d8e4679828ba\" (UID: \"22118bb6-3dd9-41d5-8215-d8e4679828ba\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.432619 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22118bb6-3dd9-41d5-8215-d8e4679828ba-internal-tls-certs\") pod \"22118bb6-3dd9-41d5-8215-d8e4679828ba\" (UID: \"22118bb6-3dd9-41d5-8215-d8e4679828ba\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.432706 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngjng\" (UniqueName: \"kubernetes.io/projected/22118bb6-3dd9-41d5-8215-d8e4679828ba-kube-api-access-ngjng\") pod \"22118bb6-3dd9-41d5-8215-d8e4679828ba\" (UID: 
\"22118bb6-3dd9-41d5-8215-d8e4679828ba\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.432806 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22118bb6-3dd9-41d5-8215-d8e4679828ba-public-tls-certs\") pod \"22118bb6-3dd9-41d5-8215-d8e4679828ba\" (UID: \"22118bb6-3dd9-41d5-8215-d8e4679828ba\") " Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.433317 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa516723-105a-4ea0-98d7-317538e3d438-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.433335 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8zpw\" (UniqueName: \"kubernetes.io/projected/c0c26286-7e5f-4610-967b-408ad3916918-kube-api-access-d8zpw\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.433347 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0c26286-7e5f-4610-967b-408ad3916918-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.433359 4766 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0c26286-7e5f-4610-967b-408ad3916918-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.433369 4766 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3ed02fac-f569-47e7-a243-6d0e37dc6c05-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.433380 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.433391 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.433401 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8162079c-abe4-4e9c-bdd5-2fbb43187e61-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.433427 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.433440 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2kdc\" (UniqueName: \"kubernetes.io/projected/fa516723-105a-4ea0-98d7-317538e3d438-kube-api-access-l2kdc\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.439487 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22118bb6-3dd9-41d5-8215-d8e4679828ba-logs" (OuterVolumeSpecName: "logs") pod "22118bb6-3dd9-41d5-8215-d8e4679828ba" (UID: "22118bb6-3dd9-41d5-8215-d8e4679828ba"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.523699 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ed02fac-f569-47e7-a243-6d0e37dc6c05-kube-api-access-59l6r" (OuterVolumeSpecName: "kube-api-access-59l6r") pod "3ed02fac-f569-47e7-a243-6d0e37dc6c05" (UID: "3ed02fac-f569-47e7-a243-6d0e37dc6c05"). InnerVolumeSpecName "kube-api-access-59l6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.530324 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8162079c-abe4-4e9c-bdd5-2fbb43187e61-kube-api-access-vcb9s" (OuterVolumeSpecName: "kube-api-access-vcb9s") pod "8162079c-abe4-4e9c-bdd5-2fbb43187e61" (UID: "8162079c-abe4-4e9c-bdd5-2fbb43187e61"). InnerVolumeSpecName "kube-api-access-vcb9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.536274 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59l6r\" (UniqueName: \"kubernetes.io/projected/3ed02fac-f569-47e7-a243-6d0e37dc6c05-kube-api-access-59l6r\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.536303 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcb9s\" (UniqueName: \"kubernetes.io/projected/8162079c-abe4-4e9c-bdd5-2fbb43187e61-kube-api-access-vcb9s\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.536316 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22118bb6-3dd9-41d5-8215-d8e4679828ba-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.558442 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22118bb6-3dd9-41d5-8215-d8e4679828ba-kube-api-access-ngjng" (OuterVolumeSpecName: "kube-api-access-ngjng") pod "22118bb6-3dd9-41d5-8215-d8e4679828ba" (UID: "22118bb6-3dd9-41d5-8215-d8e4679828ba"). InnerVolumeSpecName "kube-api-access-ngjng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.640073 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngjng\" (UniqueName: \"kubernetes.io/projected/22118bb6-3dd9-41d5-8215-d8e4679828ba-kube-api-access-ngjng\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.641056 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.647235 4766 generic.go:334] "Generic (PLEG): container finished" podID="0e9e7d37-60ae-4489-a69a-e4168eb87cf2" containerID="14e4b623cc33e1869a58abf1c35db16e3909d3d2a092250a9f93c7d83fa741ec" exitCode=0 Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.649027 4766 generic.go:334] "Generic (PLEG): container finished" podID="7d99f4d4-0dab-45de-ac76-7a0fa820c353" containerID="5c1419dc2d71a7c2e968757b6bca4a6964a8725001bdd981f617c2e2f489b856" exitCode=0 Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.650681 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.658385 4766 generic.go:334] "Generic (PLEG): container finished" podID="d00d673d-aea5-4014-8e2b-bcb78afb7606" containerID="f90c3671694662e6b9f1584abc9bd6ae5dd46f25e77b8df0cd377c69033dc174" exitCode=0 Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.658550 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0c26286-7e5f-4610-967b-408ad3916918" (UID: "c0c26286-7e5f-4610-967b-408ad3916918"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.664902 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ed02fac-f569-47e7-a243-6d0e37dc6c05-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "3ed02fac-f569-47e7-a243-6d0e37dc6c05" (UID: "3ed02fac-f569-47e7-a243-6d0e37dc6c05"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.671974 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7ff4655576-rzc26" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.694721 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-fb14-account-create-update-5hrp9" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.696576 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d3fe-account-create-update-bpt5v" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.696729 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.697513 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4e41-account-create-update-rkj42" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.704596 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ed02fac-f569-47e7-a243-6d0e37dc6c05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ed02fac-f569-47e7-a243-6d0e37dc6c05" (UID: "3ed02fac-f569-47e7-a243-6d0e37dc6c05"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.738627 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22118bb6-3dd9-41d5-8215-d8e4679828ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22118bb6-3dd9-41d5-8215-d8e4679828ba" (UID: "22118bb6-3dd9-41d5-8215-d8e4679828ba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.741987 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.742034 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed02fac-f569-47e7-a243-6d0e37dc6c05-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.742049 4766 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ed02fac-f569-47e7-a243-6d0e37dc6c05-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.742062 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22118bb6-3dd9-41d5-8215-d8e4679828ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.801217 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8162079c-abe4-4e9c-bdd5-2fbb43187e61" (UID: "8162079c-abe4-4e9c-bdd5-2fbb43187e61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.819872 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c0c26286-7e5f-4610-967b-408ad3916918" (UID: "c0c26286-7e5f-4610-967b-408ad3916918"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.836214 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22118bb6-3dd9-41d5-8215-d8e4679828ba-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "22118bb6-3dd9-41d5-8215-d8e4679828ba" (UID: "22118bb6-3dd9-41d5-8215-d8e4679828ba"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.843943 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tdtz\" (UniqueName: \"kubernetes.io/projected/b198eac9-030c-43fc-ae7d-a59e6bf299a4-kube-api-access-4tdtz\") pod \"keystone-d3fe-account-create-update-bpt5v\" (UID: \"b198eac9-030c-43fc-ae7d-a59e6bf299a4\") " pod="openstack/keystone-d3fe-account-create-update-bpt5v" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.844043 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b198eac9-030c-43fc-ae7d-a59e6bf299a4-operator-scripts\") pod \"keystone-d3fe-account-create-update-bpt5v\" (UID: \"b198eac9-030c-43fc-ae7d-a59e6bf299a4\") " pod="openstack/keystone-d3fe-account-create-update-bpt5v" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.844142 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22118bb6-3dd9-41d5-8215-d8e4679828ba-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.844154 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.844165 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:05 crc kubenswrapper[4766]: E0129 11:50:05.844224 4766 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 29 11:50:05 crc kubenswrapper[4766]: E0129 11:50:05.844273 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b198eac9-030c-43fc-ae7d-a59e6bf299a4-operator-scripts podName:b198eac9-030c-43fc-ae7d-a59e6bf299a4 nodeName:}" failed. No retries permitted until 2026-01-29 11:50:07.844259539 +0000 UTC m=+1744.956652550 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b198eac9-030c-43fc-ae7d-a59e6bf299a4-operator-scripts") pod "keystone-d3fe-account-create-update-bpt5v" (UID: "b198eac9-030c-43fc-ae7d-a59e6bf299a4") : configmap "openstack-scripts" not found Jan 29 11:50:05 crc kubenswrapper[4766]: E0129 11:50:05.848109 4766 projected.go:194] Error preparing data for projected volume kube-api-access-4tdtz for pod openstack/keystone-d3fe-account-create-update-bpt5v: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 11:50:05 crc kubenswrapper[4766]: E0129 11:50:05.848163 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b198eac9-030c-43fc-ae7d-a59e6bf299a4-kube-api-access-4tdtz podName:b198eac9-030c-43fc-ae7d-a59e6bf299a4 nodeName:}" failed. No retries permitted until 2026-01-29 11:50:07.848148831 +0000 UTC m=+1744.960541842 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4tdtz" (UniqueName: "kubernetes.io/projected/b198eac9-030c-43fc-ae7d-a59e6bf299a4-kube-api-access-4tdtz") pod "keystone-d3fe-account-create-update-bpt5v" (UID: "b198eac9-030c-43fc-ae7d-a59e6bf299a4") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.889600 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c0c26286-7e5f-4610-967b-408ad3916918" (UID: "c0c26286-7e5f-4610-967b-408ad3916918"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.889962 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-config-data" (OuterVolumeSpecName: "config-data") pod "8162079c-abe4-4e9c-bdd5-2fbb43187e61" (UID: "8162079c-abe4-4e9c-bdd5-2fbb43187e61"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.905683 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22118bb6-3dd9-41d5-8215-d8e4679828ba-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "22118bb6-3dd9-41d5-8215-d8e4679828ba" (UID: "22118bb6-3dd9-41d5-8215-d8e4679828ba"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.907185 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8162079c-abe4-4e9c-bdd5-2fbb43187e61" (UID: "8162079c-abe4-4e9c-bdd5-2fbb43187e61"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.936595 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22118bb6-3dd9-41d5-8215-d8e4679828ba-config-data" (OuterVolumeSpecName: "config-data") pod "22118bb6-3dd9-41d5-8215-d8e4679828ba" (UID: "22118bb6-3dd9-41d5-8215-d8e4679828ba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.948121 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.948152 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22118bb6-3dd9-41d5-8215-d8e4679828ba-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.948161 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.948169 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22118bb6-3dd9-41d5-8215-d8e4679828ba-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.948177 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:05 crc kubenswrapper[4766]: I0129 11:50:05.959895 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-config-data" (OuterVolumeSpecName: "config-data") pod "c0c26286-7e5f-4610-967b-408ad3916918" (UID: "c0c26286-7e5f-4610-967b-408ad3916918"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.017869 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8162079c-abe4-4e9c-bdd5-2fbb43187e61" (UID: "8162079c-abe4-4e9c-bdd5-2fbb43187e61"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.049994 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0c26286-7e5f-4610-967b-408ad3916918-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.050030 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8162079c-abe4-4e9c-bdd5-2fbb43187e61-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.069463 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jgkf2"] Jan 29 11:50:06 crc kubenswrapper[4766]: E0129 11:50:06.069892 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c26286-7e5f-4610-967b-408ad3916918" containerName="cinder-api" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.069905 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c26286-7e5f-4610-967b-408ad3916918" containerName="cinder-api" Jan 29 11:50:06 crc kubenswrapper[4766]: E0129 11:50:06.069921 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22118bb6-3dd9-41d5-8215-d8e4679828ba" containerName="nova-api-api" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.069927 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="22118bb6-3dd9-41d5-8215-d8e4679828ba" containerName="nova-api-api" Jan 29 11:50:06 crc kubenswrapper[4766]: E0129 11:50:06.069938 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8162079c-abe4-4e9c-bdd5-2fbb43187e61" containerName="placement-log" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.069944 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8162079c-abe4-4e9c-bdd5-2fbb43187e61" containerName="placement-log" Jan 29 11:50:06 crc kubenswrapper[4766]: E0129 11:50:06.069984 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8162079c-abe4-4e9c-bdd5-2fbb43187e61" containerName="placement-api" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.069989 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8162079c-abe4-4e9c-bdd5-2fbb43187e61" containerName="placement-api" Jan 29 11:50:06 crc kubenswrapper[4766]: E0129 11:50:06.070002 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ed02fac-f569-47e7-a243-6d0e37dc6c05" containerName="kube-state-metrics" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.070009 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ed02fac-f569-47e7-a243-6d0e37dc6c05" containerName="kube-state-metrics" Jan 29 11:50:06 crc kubenswrapper[4766]: E0129 11:50:06.070019 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c26286-7e5f-4610-967b-408ad3916918" containerName="cinder-api-log" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.070025 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c26286-7e5f-4610-967b-408ad3916918" containerName="cinder-api-log" Jan 29 11:50:06 crc kubenswrapper[4766]: E0129 11:50:06.070048 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22118bb6-3dd9-41d5-8215-d8e4679828ba" containerName="nova-api-log" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.070054 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="22118bb6-3dd9-41d5-8215-d8e4679828ba" containerName="nova-api-log" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 
11:50:06.070233 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="22118bb6-3dd9-41d5-8215-d8e4679828ba" containerName="nova-api-log" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.070243 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0c26286-7e5f-4610-967b-408ad3916918" containerName="cinder-api" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.070276 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="22118bb6-3dd9-41d5-8215-d8e4679828ba" containerName="nova-api-api" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.070284 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8162079c-abe4-4e9c-bdd5-2fbb43187e61" containerName="placement-api" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.070295 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8162079c-abe4-4e9c-bdd5-2fbb43187e61" containerName="placement-log" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.070304 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0c26286-7e5f-4610-967b-408ad3916918" containerName="cinder-api-log" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.070311 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ed02fac-f569-47e7-a243-6d0e37dc6c05" containerName="kube-state-metrics" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.071457 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cd5cbd7b9-hznsj" podUID="d9ea6d98-59cc-4526-bf59-7328c0321f59" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.193:5353: i/o timeout" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.071862 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jgkf2"] Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.071903 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3ed02fac-f569-47e7-a243-6d0e37dc6c05","Type":"ContainerDied","Data":"c156cf6ad7a4e32c907b5cc716c7585bd7b76367b9d53c43e4e1b9e7645295f3"} Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.071924 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dd1ffb49-b314-4d31-94d6-de70e35d917e","Type":"ContainerDied","Data":"67055ba92e4a2227007f2b095adf233704d42828da7e147a26d5f7c0ce732daf"} Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.071938 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67055ba92e4a2227007f2b095adf233704d42828da7e147a26d5f7c0ce732daf" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.071946 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-864fcd46f6-bn7r2" event={"ID":"a084e5b1-d167-4678-8ab9-af72fb1d07fd","Type":"ContainerDied","Data":"729628f99dbe284d0566cd89b7bc6d6668d3c6d4355d51dccc7a5107d775097c"} Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.071956 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="729628f99dbe284d0566cd89b7bc6d6668d3c6d4355d51dccc7a5107d775097c" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.071979 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0e9e7d37-60ae-4489-a69a-e4168eb87cf2","Type":"ContainerDied","Data":"14e4b623cc33e1869a58abf1c35db16e3909d3d2a092250a9f93c7d83fa741ec"} Jan 29 11:50:06 crc kubenswrapper[4766]: 
I0129 11:50:06.071990 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7d99f4d4-0dab-45de-ac76-7a0fa820c353","Type":"ContainerDied","Data":"5c1419dc2d71a7c2e968757b6bca4a6964a8725001bdd981f617c2e2f489b856"} Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.072002 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c0c26286-7e5f-4610-967b-408ad3916918","Type":"ContainerDied","Data":"e496e4f74d98dd549bc03610d7e5f96b0a9e33f9406ec5d5da1f9e47db52ae8e"} Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.072013 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d00d673d-aea5-4014-8e2b-bcb78afb7606","Type":"ContainerDied","Data":"f90c3671694662e6b9f1584abc9bd6ae5dd46f25e77b8df0cd377c69033dc174"} Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.072022 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d00d673d-aea5-4014-8e2b-bcb78afb7606","Type":"ContainerDied","Data":"c9c8fb0afaaf1c81a8af8256095a8dec5f807b9051a457ec08e7a651f681805c"} Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.072030 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9c8fb0afaaf1c81a8af8256095a8dec5f807b9051a457ec08e7a651f681805c" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.072268 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ff4655576-rzc26" event={"ID":"8162079c-abe4-4e9c-bdd5-2fbb43187e61","Type":"ContainerDied","Data":"f4e02c5c9ca3232944730e43315de023c249db73a8f920bedf12c540c18cd376"} Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.072296 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"22118bb6-3dd9-41d5-8215-d8e4679828ba","Type":"ContainerDied","Data":"12cd75e180e319b7141c9e38bdac39b69a6314db03a5d8b8b1a00ad19283783b"} Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.072254 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jgkf2" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.072272 4766 scope.go:117] "RemoveContainer" containerID="98b39f027e94d9b7e2c9e0f75cbec74515a6877539cbea210a05a9de92134411" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.103068 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-864fcd46f6-bn7r2" Jan 29 11:50:06 crc kubenswrapper[4766]: E0129 11:50:06.103687 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="975d9dec64a2fca25f52a750db0c70feb57df9e6479ecb4133299bd8f6a0e06c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 11:50:06 crc kubenswrapper[4766]: E0129 11:50:06.117804 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="975d9dec64a2fca25f52a750db0c70feb57df9e6479ecb4133299bd8f6a0e06c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 11:50:06 crc kubenswrapper[4766]: E0129 11:50:06.136787 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="975d9dec64a2fca25f52a750db0c70feb57df9e6479ecb4133299bd8f6a0e06c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 11:50:06 crc kubenswrapper[4766]: E0129 11:50:06.136851 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="7245aebe-fe32-42fc-a489-c38b15bb4308" containerName="nova-cell0-conductor-conductor" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.145246 4766 scope.go:117] "RemoveContainer" containerID="952afb9816e99acbe37c8a9ddc03d82aee8becf7ea80015a22c126ca32f58ff9" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.147273 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.153171 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-combined-ca-bundle\") pod \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\" (UID: \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.153235 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-internal-tls-certs\") pod \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\" (UID: \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.153266 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bz26\" (UniqueName: \"kubernetes.io/projected/a084e5b1-d167-4678-8ab9-af72fb1d07fd-kube-api-access-9bz26\") pod \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\" (UID: \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.153291 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-config-data-custom\") pod \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\" (UID: \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.153340 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-public-tls-certs\") pod \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\" (UID: \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.153368 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-config-data\") pod \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\" (UID: \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.153469 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a084e5b1-d167-4678-8ab9-af72fb1d07fd-logs\") pod \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\" (UID: \"a084e5b1-d167-4678-8ab9-af72fb1d07fd\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.153704 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42003677-86a8-45ca-ab5e-5f8a029a5cf0-utilities\") pod \"community-operators-jgkf2\" (UID: \"42003677-86a8-45ca-ab5e-5f8a029a5cf0\") " pod="openshift-marketplace/community-operators-jgkf2" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.153832 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-972w7\" (UniqueName: \"kubernetes.io/projected/42003677-86a8-45ca-ab5e-5f8a029a5cf0-kube-api-access-972w7\") pod \"community-operators-jgkf2\" (UID: \"42003677-86a8-45ca-ab5e-5f8a029a5cf0\") " pod="openshift-marketplace/community-operators-jgkf2" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.153887 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42003677-86a8-45ca-ab5e-5f8a029a5cf0-catalog-content\") pod \"community-operators-jgkf2\" (UID: \"42003677-86a8-45ca-ab5e-5f8a029a5cf0\") " pod="openshift-marketplace/community-operators-jgkf2" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.155287 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.157575 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a084e5b1-d167-4678-8ab9-af72fb1d07fd-logs" (OuterVolumeSpecName: "logs") pod "a084e5b1-d167-4678-8ab9-af72fb1d07fd" (UID: "a084e5b1-d167-4678-8ab9-af72fb1d07fd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.157707 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a084e5b1-d167-4678-8ab9-af72fb1d07fd-kube-api-access-9bz26" (OuterVolumeSpecName: "kube-api-access-9bz26") pod "a084e5b1-d167-4678-8ab9-af72fb1d07fd" (UID: "a084e5b1-d167-4678-8ab9-af72fb1d07fd"). InnerVolumeSpecName "kube-api-access-9bz26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.187598 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a084e5b1-d167-4678-8ab9-af72fb1d07fd" (UID: "a084e5b1-d167-4678-8ab9-af72fb1d07fd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.234813 4766 scope.go:117] "RemoveContainer" containerID="a3a9fbf48c090c048092e1c49334325b7802b39586ef26e6e58e4213960da8d3" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.240756 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a084e5b1-d167-4678-8ab9-af72fb1d07fd" (UID: "a084e5b1-d167-4678-8ab9-af72fb1d07fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.250617 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-d3fe-account-create-update-bpt5v" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.257965 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.262730 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d00d673d-aea5-4014-8e2b-bcb78afb7606-kolla-config\") pod \"d00d673d-aea5-4014-8e2b-bcb78afb7606\" (UID: \"d00d673d-aea5-4014-8e2b-bcb78afb7606\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.262788 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"dd1ffb49-b314-4d31-94d6-de70e35d917e\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.262847 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d00d673d-aea5-4014-8e2b-bcb78afb7606-memcached-tls-certs\") pod \"d00d673d-aea5-4014-8e2b-bcb78afb7606\" (UID: \"d00d673d-aea5-4014-8e2b-bcb78afb7606\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.262881 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd1ffb49-b314-4d31-94d6-de70e35d917e-config-data\") pod \"dd1ffb49-b314-4d31-94d6-de70e35d917e\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.262905 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvzbb\" (UniqueName: \"kubernetes.io/projected/dd1ffb49-b314-4d31-94d6-de70e35d917e-kube-api-access-xvzbb\") pod \"dd1ffb49-b314-4d31-94d6-de70e35d917e\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.262931 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd1ffb49-b314-4d31-94d6-de70e35d917e-logs\") pod \"dd1ffb49-b314-4d31-94d6-de70e35d917e\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.262958 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd1ffb49-b314-4d31-94d6-de70e35d917e-scripts\") pod \"dd1ffb49-b314-4d31-94d6-de70e35d917e\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.263000 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d00d673d-aea5-4014-8e2b-bcb78afb7606-config-data\") pod \"d00d673d-aea5-4014-8e2b-bcb78afb7606\" (UID: \"d00d673d-aea5-4014-8e2b-bcb78afb7606\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.263065 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00d673d-aea5-4014-8e2b-bcb78afb7606-combined-ca-bundle\") pod \"d00d673d-aea5-4014-8e2b-bcb78afb7606\" (UID: \"d00d673d-aea5-4014-8e2b-bcb78afb7606\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.263134 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/dd1ffb49-b314-4d31-94d6-de70e35d917e-httpd-run\") pod \"dd1ffb49-b314-4d31-94d6-de70e35d917e\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.263172 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd1ffb49-b314-4d31-94d6-de70e35d917e-combined-ca-bundle\") pod \"dd1ffb49-b314-4d31-94d6-de70e35d917e\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.263198 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq4rw\" (UniqueName: \"kubernetes.io/projected/d00d673d-aea5-4014-8e2b-bcb78afb7606-kube-api-access-cq4rw\") pod \"d00d673d-aea5-4014-8e2b-bcb78afb7606\" (UID: \"d00d673d-aea5-4014-8e2b-bcb78afb7606\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.263219 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd1ffb49-b314-4d31-94d6-de70e35d917e-public-tls-certs\") pod \"dd1ffb49-b314-4d31-94d6-de70e35d917e\" (UID: \"dd1ffb49-b314-4d31-94d6-de70e35d917e\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.264734 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42003677-86a8-45ca-ab5e-5f8a029a5cf0-utilities\") pod \"community-operators-jgkf2\" (UID: \"42003677-86a8-45ca-ab5e-5f8a029a5cf0\") " pod="openshift-marketplace/community-operators-jgkf2" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.264967 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-972w7\" (UniqueName: \"kubernetes.io/projected/42003677-86a8-45ca-ab5e-5f8a029a5cf0-kube-api-access-972w7\") pod \"community-operators-jgkf2\" (UID: \"42003677-86a8-45ca-ab5e-5f8a029a5cf0\") " pod="openshift-marketplace/community-operators-jgkf2" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.265051 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42003677-86a8-45ca-ab5e-5f8a029a5cf0-catalog-content\") pod \"community-operators-jgkf2\" (UID: \"42003677-86a8-45ca-ab5e-5f8a029a5cf0\") " pod="openshift-marketplace/community-operators-jgkf2" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.265161 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a084e5b1-d167-4678-8ab9-af72fb1d07fd-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.265175 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.265187 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bz26\" (UniqueName: \"kubernetes.io/projected/a084e5b1-d167-4678-8ab9-af72fb1d07fd-kube-api-access-9bz26\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.265197 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc 
kubenswrapper[4766]: I0129 11:50:06.265825 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42003677-86a8-45ca-ab5e-5f8a029a5cf0-catalog-content\") pod \"community-operators-jgkf2\" (UID: \"42003677-86a8-45ca-ab5e-5f8a029a5cf0\") " pod="openshift-marketplace/community-operators-jgkf2" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.266067 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d00d673d-aea5-4014-8e2b-bcb78afb7606-config-data" (OuterVolumeSpecName: "config-data") pod "d00d673d-aea5-4014-8e2b-bcb78afb7606" (UID: "d00d673d-aea5-4014-8e2b-bcb78afb7606"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.266334 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d00d673d-aea5-4014-8e2b-bcb78afb7606-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "d00d673d-aea5-4014-8e2b-bcb78afb7606" (UID: "d00d673d-aea5-4014-8e2b-bcb78afb7606"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.274719 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd1ffb49-b314-4d31-94d6-de70e35d917e-logs" (OuterVolumeSpecName: "logs") pod "dd1ffb49-b314-4d31-94d6-de70e35d917e" (UID: "dd1ffb49-b314-4d31-94d6-de70e35d917e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: E0129 11:50:06.280785 4766 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 29 11:50:06 crc kubenswrapper[4766]: E0129 11:50:06.281623 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6739e909-eb6b-4578-8436-fa9f24385e0a-operator-scripts podName:6739e909-eb6b-4578-8436-fa9f24385e0a nodeName:}" failed. No retries permitted until 2026-01-29 11:50:08.281606766 +0000 UTC m=+1745.393999767 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/6739e909-eb6b-4578-8436-fa9f24385e0a-operator-scripts") pod "root-account-create-update-lxhcz" (UID: "6739e909-eb6b-4578-8436-fa9f24385e0a") : configmap "openstack-scripts" not found Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.281347 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42003677-86a8-45ca-ab5e-5f8a029a5cf0-utilities\") pod \"community-operators-jgkf2\" (UID: \"42003677-86a8-45ca-ab5e-5f8a029a5cf0\") " pod="openshift-marketplace/community-operators-jgkf2" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.281114 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd1ffb49-b314-4d31-94d6-de70e35d917e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "dd1ffb49-b314-4d31-94d6-de70e35d917e" (UID: "dd1ffb49-b314-4d31-94d6-de70e35d917e"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.290540 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd1ffb49-b314-4d31-94d6-de70e35d917e-scripts" (OuterVolumeSpecName: "scripts") pod "dd1ffb49-b314-4d31-94d6-de70e35d917e" (UID: "dd1ffb49-b314-4d31-94d6-de70e35d917e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.290629 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "dd1ffb49-b314-4d31-94d6-de70e35d917e" (UID: "dd1ffb49-b314-4d31-94d6-de70e35d917e"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: E0129 11:50:06.290760 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2 is running failed: container process not found" containerID="ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 11:50:06 crc kubenswrapper[4766]: E0129 11:50:06.292909 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d0c73be724cc09499410e85d8a2850f80580b59a49608c7346ae0c91c515cca" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.293224 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d00d673d-aea5-4014-8e2b-bcb78afb7606-kube-api-access-cq4rw" (OuterVolumeSpecName: "kube-api-access-cq4rw") pod "d00d673d-aea5-4014-8e2b-bcb78afb7606" (UID: "d00d673d-aea5-4014-8e2b-bcb78afb7606"). InnerVolumeSpecName "kube-api-access-cq4rw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: E0129 11:50:06.300774 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2 is running failed: container process not found" containerID="ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 11:50:06 crc kubenswrapper[4766]: E0129 11:50:06.300955 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d0c73be724cc09499410e85d8a2850f80580b59a49608c7346ae0c91c515cca" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.305618 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd1ffb49-b314-4d31-94d6-de70e35d917e-kube-api-access-xvzbb" (OuterVolumeSpecName: "kube-api-access-xvzbb") pod "dd1ffb49-b314-4d31-94d6-de70e35d917e" (UID: "dd1ffb49-b314-4d31-94d6-de70e35d917e"). InnerVolumeSpecName "kube-api-access-xvzbb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: E0129 11:50:06.305958 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2 is running failed: container process not found" containerID="ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 11:50:06 crc kubenswrapper[4766]: E0129 11:50:06.306009 4766 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-2gh2n" podUID="be830961-a6c3-4340-a134-ea20de96b31b" containerName="ovsdb-server" Jan 29 11:50:06 crc kubenswrapper[4766]: E0129 11:50:06.306329 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d0c73be724cc09499410e85d8a2850f80580b59a49608c7346ae0c91c515cca" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 11:50:06 crc kubenswrapper[4766]: E0129 11:50:06.306354 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-2gh2n" podUID="be830961-a6c3-4340-a134-ea20de96b31b" containerName="ovs-vswitchd" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.307935 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.309777 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.321762 4766 scope.go:117] "RemoveContainer" containerID="6b47eab1e9e54a967ffb6a8dbb5d22f27c753e7cad3329b3e2436f5c3898c7c9" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.322019 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-972w7\" (UniqueName: \"kubernetes.io/projected/42003677-86a8-45ca-ab5e-5f8a029a5cf0-kube-api-access-972w7\") pod \"community-operators-jgkf2\" (UID: \"42003677-86a8-45ca-ab5e-5f8a029a5cf0\") " pod="openshift-marketplace/community-operators-jgkf2" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.347405 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jgkf2" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.354032 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.361048 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a084e5b1-d167-4678-8ab9-af72fb1d07fd" (UID: "a084e5b1-d167-4678-8ab9-af72fb1d07fd"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.361484 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a084e5b1-d167-4678-8ab9-af72fb1d07fd" (UID: "a084e5b1-d167-4678-8ab9-af72fb1d07fd"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.362491 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-config-data" (OuterVolumeSpecName: "config-data") pod "a084e5b1-d167-4678-8ab9-af72fb1d07fd" (UID: "a084e5b1-d167-4678-8ab9-af72fb1d07fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.366377 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1996793f-f3ca-4559-97d6-867f0d0a2b61-scripts\") pod \"1996793f-f3ca-4559-97d6-867f0d0a2b61\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.366431 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1996793f-f3ca-4559-97d6-867f0d0a2b61-combined-ca-bundle\") pod \"1996793f-f3ca-4559-97d6-867f0d0a2b61\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.366473 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2ff7\" (UniqueName: \"kubernetes.io/projected/1996793f-f3ca-4559-97d6-867f0d0a2b61-kube-api-access-n2ff7\") pod \"1996793f-f3ca-4559-97d6-867f0d0a2b61\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.366531 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1996793f-f3ca-4559-97d6-867f0d0a2b61-internal-tls-certs\") pod \"1996793f-f3ca-4559-97d6-867f0d0a2b61\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.366584 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zdqb\" (UniqueName: \"kubernetes.io/projected/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-kube-api-access-5zdqb\") pod \"34a8c513-ef7f-49ce-a0d8-2d9351abca2a\" (UID: \"34a8c513-ef7f-49ce-a0d8-2d9351abca2a\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.366606 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-config-data\") pod \"34a8c513-ef7f-49ce-a0d8-2d9351abca2a\" (UID: \"34a8c513-ef7f-49ce-a0d8-2d9351abca2a\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.366632 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-combined-ca-bundle\") pod \"34a8c513-ef7f-49ce-a0d8-2d9351abca2a\" (UID: \"34a8c513-ef7f-49ce-a0d8-2d9351abca2a\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.366654 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-nova-metadata-tls-certs\") pod \"34a8c513-ef7f-49ce-a0d8-2d9351abca2a\" (UID: \"34a8c513-ef7f-49ce-a0d8-2d9351abca2a\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.366672 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"1996793f-f3ca-4559-97d6-867f0d0a2b61\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.366688 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-logs\") pod \"34a8c513-ef7f-49ce-a0d8-2d9351abca2a\" (UID: \"34a8c513-ef7f-49ce-a0d8-2d9351abca2a\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.366723 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1996793f-f3ca-4559-97d6-867f0d0a2b61-httpd-run\") pod \"1996793f-f3ca-4559-97d6-867f0d0a2b61\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.366822 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1996793f-f3ca-4559-97d6-867f0d0a2b61-logs\") pod \"1996793f-f3ca-4559-97d6-867f0d0a2b61\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.366840 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1996793f-f3ca-4559-97d6-867f0d0a2b61-config-data\") pod \"1996793f-f3ca-4559-97d6-867f0d0a2b61\" (UID: \"1996793f-f3ca-4559-97d6-867f0d0a2b61\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.367565 4766 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d00d673d-aea5-4014-8e2b-bcb78afb7606-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.367589 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.367599 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvzbb\" (UniqueName: \"kubernetes.io/projected/dd1ffb49-b314-4d31-94d6-de70e35d917e-kube-api-access-xvzbb\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.367609 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd1ffb49-b314-4d31-94d6-de70e35d917e-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.367617 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd1ffb49-b314-4d31-94d6-de70e35d917e-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.367626 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d00d673d-aea5-4014-8e2b-bcb78afb7606-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.367634 4766 
reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.367642 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.367651 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dd1ffb49-b314-4d31-94d6-de70e35d917e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.367659 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a084e5b1-d167-4678-8ab9-af72fb1d07fd-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.367667 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq4rw\" (UniqueName: \"kubernetes.io/projected/d00d673d-aea5-4014-8e2b-bcb78afb7606-kube-api-access-cq4rw\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.376564 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.377273 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-logs" (OuterVolumeSpecName: "logs") pod "34a8c513-ef7f-49ce-a0d8-2d9351abca2a" (UID: "34a8c513-ef7f-49ce-a0d8-2d9351abca2a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.380497 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1996793f-f3ca-4559-97d6-867f0d0a2b61-logs" (OuterVolumeSpecName: "logs") pod "1996793f-f3ca-4559-97d6-867f0d0a2b61" (UID: "1996793f-f3ca-4559-97d6-867f0d0a2b61"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.380778 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1996793f-f3ca-4559-97d6-867f0d0a2b61-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1996793f-f3ca-4559-97d6-867f0d0a2b61" (UID: "1996793f-f3ca-4559-97d6-867f0d0a2b61"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.392339 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1996793f-f3ca-4559-97d6-867f0d0a2b61-scripts" (OuterVolumeSpecName: "scripts") pod "1996793f-f3ca-4559-97d6-867f0d0a2b61" (UID: "1996793f-f3ca-4559-97d6-867f0d0a2b61"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.411260 4766 scope.go:117] "RemoveContainer" containerID="8b8626b814bdc9ebbe0eb6d6c45744653225b6c9c53cd0a3325216664d30e4d6" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.419661 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-kube-api-access-5zdqb" (OuterVolumeSpecName: "kube-api-access-5zdqb") pod "34a8c513-ef7f-49ce-a0d8-2d9351abca2a" (UID: "34a8c513-ef7f-49ce-a0d8-2d9351abca2a"). InnerVolumeSpecName "kube-api-access-5zdqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.420141 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1996793f-f3ca-4559-97d6-867f0d0a2b61-kube-api-access-n2ff7" (OuterVolumeSpecName: "kube-api-access-n2ff7") pod "1996793f-f3ca-4559-97d6-867f0d0a2b61" (UID: "1996793f-f3ca-4559-97d6-867f0d0a2b61"). InnerVolumeSpecName "kube-api-access-n2ff7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.428268 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "1996793f-f3ca-4559-97d6-867f0d0a2b61" (UID: "1996793f-f3ca-4559-97d6-867f0d0a2b61"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.440166 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.444325 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.451785 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d00d673d-aea5-4014-8e2b-bcb78afb7606-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d00d673d-aea5-4014-8e2b-bcb78afb7606" (UID: "d00d673d-aea5-4014-8e2b-bcb78afb7606"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.469024 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pglrh\" (UniqueName: \"kubernetes.io/projected/7d99f4d4-0dab-45de-ac76-7a0fa820c353-kube-api-access-pglrh\") pod \"7d99f4d4-0dab-45de-ac76-7a0fa820c353\" (UID: \"7d99f4d4-0dab-45de-ac76-7a0fa820c353\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.469211 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d99f4d4-0dab-45de-ac76-7a0fa820c353-combined-ca-bundle\") pod \"7d99f4d4-0dab-45de-ac76-7a0fa820c353\" (UID: \"7d99f4d4-0dab-45de-ac76-7a0fa820c353\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.469285 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d99f4d4-0dab-45de-ac76-7a0fa820c353-config-data\") pod \"7d99f4d4-0dab-45de-ac76-7a0fa820c353\" (UID: \"7d99f4d4-0dab-45de-ac76-7a0fa820c353\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.469630 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.469646 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1996793f-f3ca-4559-97d6-867f0d0a2b61-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.469655 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2ff7\" (UniqueName: \"kubernetes.io/projected/1996793f-f3ca-4559-97d6-867f0d0a2b61-kube-api-access-n2ff7\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.469664 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zdqb\" (UniqueName: \"kubernetes.io/projected/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-kube-api-access-5zdqb\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.469684 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.469693 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.469701 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00d673d-aea5-4014-8e2b-bcb78afb7606-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.469710 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1996793f-f3ca-4559-97d6-867f0d0a2b61-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.469718 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1996793f-f3ca-4559-97d6-867f0d0a2b61-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.470714 4766 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd1ffb49-b314-4d31-94d6-de70e35d917e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd1ffb49-b314-4d31-94d6-de70e35d917e" (UID: "dd1ffb49-b314-4d31-94d6-de70e35d917e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: E0129 11:50:06.470775 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 29 11:50:06 crc kubenswrapper[4766]: E0129 11:50:06.470816 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b77b577e-b980-46fb-945a-a0b57e3bdc17-config-data podName:b77b577e-b980-46fb-945a-a0b57e3bdc17 nodeName:}" failed. No retries permitted until 2026-01-29 11:50:14.470801658 +0000 UTC m=+1751.583194669 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b77b577e-b980-46fb-945a-a0b57e3bdc17-config-data") pod "rabbitmq-server-0" (UID: "b77b577e-b980-46fb-945a-a0b57e3bdc17") : configmap "rabbitmq-config-data" not found Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.473952 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.496881 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4e41-account-create-update-rkj42"] Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.506580 4766 scope.go:117] "RemoveContainer" containerID="b58d99a157c04c8d7ed139d5f25b0d83bc497ae3ae246723524e597810fc69f5" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.515082 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-4e41-account-create-update-rkj42"] Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.519814 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d99f4d4-0dab-45de-ac76-7a0fa820c353-kube-api-access-pglrh" (OuterVolumeSpecName: "kube-api-access-pglrh") pod "7d99f4d4-0dab-45de-ac76-7a0fa820c353" (UID: "7d99f4d4-0dab-45de-ac76-7a0fa820c353"). InnerVolumeSpecName "kube-api-access-pglrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.553596 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-fb14-account-create-update-5hrp9"] Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.558134 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lxhcz" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.568589 4766 scope.go:117] "RemoveContainer" containerID="5eccd5688bd3f21bd7b9ab6b4fa9bc25010dd2b9cb4c6d665db537e3ffb66b72" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.571352 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd1ffb49-b314-4d31-94d6-de70e35d917e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.571381 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pglrh\" (UniqueName: \"kubernetes.io/projected/7d99f4d4-0dab-45de-ac76-7a0fa820c353-kube-api-access-pglrh\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.562859 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-fb14-account-create-update-5hrp9"] Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.582396 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd1ffb49-b314-4d31-94d6-de70e35d917e-config-data" (OuterVolumeSpecName: "config-data") pod "dd1ffb49-b314-4d31-94d6-de70e35d917e" (UID: "dd1ffb49-b314-4d31-94d6-de70e35d917e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.595979 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-config-data" (OuterVolumeSpecName: "config-data") pod "34a8c513-ef7f-49ce-a0d8-2d9351abca2a" (UID: "34a8c513-ef7f-49ce-a0d8-2d9351abca2a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.617916 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1996793f-f3ca-4559-97d6-867f0d0a2b61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1996793f-f3ca-4559-97d6-867f0d0a2b61" (UID: "1996793f-f3ca-4559-97d6-867f0d0a2b61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.646285 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34a8c513-ef7f-49ce-a0d8-2d9351abca2a" (UID: "34a8c513-ef7f-49ce-a0d8-2d9351abca2a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.661010 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d00d673d-aea5-4014-8e2b-bcb78afb7606-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "d00d673d-aea5-4014-8e2b-bcb78afb7606" (UID: "d00d673d-aea5-4014-8e2b-bcb78afb7606"). InnerVolumeSpecName "memcached-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.674637 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lh6n\" (UniqueName: \"kubernetes.io/projected/6739e909-eb6b-4578-8436-fa9f24385e0a-kube-api-access-2lh6n\") pod \"6739e909-eb6b-4578-8436-fa9f24385e0a\" (UID: \"6739e909-eb6b-4578-8436-fa9f24385e0a\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.674871 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6739e909-eb6b-4578-8436-fa9f24385e0a-operator-scripts\") pod \"6739e909-eb6b-4578-8436-fa9f24385e0a\" (UID: \"6739e909-eb6b-4578-8436-fa9f24385e0a\") " Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.684211 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.684210 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6739e909-eb6b-4578-8436-fa9f24385e0a-kube-api-access-2lh6n" (OuterVolumeSpecName: "kube-api-access-2lh6n") pod "6739e909-eb6b-4578-8436-fa9f24385e0a" (UID: "6739e909-eb6b-4578-8436-fa9f24385e0a"). InnerVolumeSpecName "kube-api-access-2lh6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.687807 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6739e909-eb6b-4578-8436-fa9f24385e0a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6739e909-eb6b-4578-8436-fa9f24385e0a" (UID: "6739e909-eb6b-4578-8436-fa9f24385e0a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.707814 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1996793f-f3ca-4559-97d6-867f0d0a2b61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.707856 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lh6n\" (UniqueName: \"kubernetes.io/projected/6739e909-eb6b-4578-8436-fa9f24385e0a-kube-api-access-2lh6n\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.707870 4766 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d00d673d-aea5-4014-8e2b-bcb78afb7606-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.707882 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd1ffb49-b314-4d31-94d6-de70e35d917e-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.707895 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.707909 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.707922 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.707938 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6739e909-eb6b-4578-8436-fa9f24385e0a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.708041 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1996793f-f3ca-4559-97d6-867f0d0a2b61-config-data" (OuterVolumeSpecName: "config-data") pod "1996793f-f3ca-4559-97d6-867f0d0a2b61" (UID: "1996793f-f3ca-4559-97d6-867f0d0a2b61"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.726054 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1996793f-f3ca-4559-97d6-867f0d0a2b61-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1996793f-f3ca-4559-97d6-867f0d0a2b61" (UID: "1996793f-f3ca-4559-97d6-867f0d0a2b61"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.729018 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lxhcz" event={"ID":"6739e909-eb6b-4578-8436-fa9f24385e0a","Type":"ContainerDied","Data":"125c849041b9198229cb48d4edea827510d588b563f18cc0b1fa1075043e99f2"} Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.729262 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lxhcz" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.730138 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d99f4d4-0dab-45de-ac76-7a0fa820c353-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d99f4d4-0dab-45de-ac76-7a0fa820c353" (UID: "7d99f4d4-0dab-45de-ac76-7a0fa820c353"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.731238 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d99f4d4-0dab-45de-ac76-7a0fa820c353-config-data" (OuterVolumeSpecName: "config-data") pod "7d99f4d4-0dab-45de-ac76-7a0fa820c353" (UID: "7d99f4d4-0dab-45de-ac76-7a0fa820c353"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.734122 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7d99f4d4-0dab-45de-ac76-7a0fa820c353","Type":"ContainerDied","Data":"8ee6be214b93301929773558e83d4a1dad288de13179e2d0f6044a33f74ce1bb"} Jan 29 11:50:06 crc kubenswrapper[4766]: I0129 11:50:06.734273 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:06.769781 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6c6eae2b-18a8-4a82-95e2-4940490b1678","Type":"ContainerDied","Data":"004bd341daa79bad3d15e54cc1bb127c54401c3d66802d245e39b218f040695f"} Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:06.769752 4766 generic.go:334] "Generic (PLEG): container finished" podID="6c6eae2b-18a8-4a82-95e2-4940490b1678" containerID="004bd341daa79bad3d15e54cc1bb127c54401c3d66802d245e39b218f040695f" exitCode=0 Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:06.772760 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "34a8c513-ef7f-49ce-a0d8-2d9351abca2a" (UID: "34a8c513-ef7f-49ce-a0d8-2d9351abca2a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:06.778791 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34a8c513-ef7f-49ce-a0d8-2d9351abca2a","Type":"ContainerDied","Data":"d728857aa1e8a906feb7e587b81596585c17a64aa26c46c6abdaf249a2e3dadd"} Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:06.778878 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:06.781293 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd1ffb49-b314-4d31-94d6-de70e35d917e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dd1ffb49-b314-4d31-94d6-de70e35d917e" (UID: "dd1ffb49-b314-4d31-94d6-de70e35d917e"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:06.785228 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:06.788597 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:06.792903 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:06.794480 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-864fcd46f6-bn7r2" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:06.797441 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:06.797535 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1996793f-f3ca-4559-97d6-867f0d0a2b61","Type":"ContainerDied","Data":"d714fa11d46ffb614c3c30a66523c9e5b6c471d6f8471c3847aee783b2cb5d33"} Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:06.797578 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d3fe-account-create-update-bpt5v" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:06.809018 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:06.810853 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1996793f-f3ca-4559-97d6-867f0d0a2b61-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:06.810891 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d99f4d4-0dab-45de-ac76-7a0fa820c353-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:06.810903 4766 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/34a8c513-ef7f-49ce-a0d8-2d9351abca2a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:06.810911 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d99f4d4-0dab-45de-ac76-7a0fa820c353-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:06.810921 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd1ffb49-b314-4d31-94d6-de70e35d917e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:06.810929 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1996793f-f3ca-4559-97d6-867f0d0a2b61-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:06.822834 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7ff4655576-rzc26"] Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:06.831488 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7ff4655576-rzc26"] Jan 29 
11:50:07 crc kubenswrapper[4766]: I0129 11:50:06.963274 4766 scope.go:117] "RemoveContainer" containerID="f107edeb0cf35d80499dbe98e5cf3636533bd7951938e89d6e981b29197977de" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:06.967307 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:06.974950 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:06.998513 4766 scope.go:117] "RemoveContainer" containerID="5c1419dc2d71a7c2e968757b6bca4a6964a8725001bdd981f617c2e2f489b856" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.021745 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.031743 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.039177 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.045275 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.063469 4766 scope.go:117] "RemoveContainer" containerID="6540ff6aadfe105654848b099a8bef21fce6c3bc83bf18acea31d173e8986a0b" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.066452 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.082385 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.095348 4766 scope.go:117] "RemoveContainer" containerID="ca450350b6e568d52d4063cfee6673c0157620922fe751480913c07db96dc186" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.108050 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-lxhcz"] Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.117870 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-lxhcz"] Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.120646 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-combined-ca-bundle\") pod \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\" (UID: \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\") " Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.120691 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-etc-machine-id\") pod \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\" (UID: \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\") " Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.120737 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-scripts\") pod \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\" (UID: \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\") " Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.120817 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-zbn4j\" (UniqueName: \"kubernetes.io/projected/6c6eae2b-18a8-4a82-95e2-4940490b1678-kube-api-access-zbn4j\") pod \"6c6eae2b-18a8-4a82-95e2-4940490b1678\" (UID: \"6c6eae2b-18a8-4a82-95e2-4940490b1678\") " Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.120855 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-config-data-custom\") pod \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\" (UID: \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\") " Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.120901 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c6eae2b-18a8-4a82-95e2-4940490b1678-config-data\") pod \"6c6eae2b-18a8-4a82-95e2-4940490b1678\" (UID: \"6c6eae2b-18a8-4a82-95e2-4940490b1678\") " Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.120935 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6eae2b-18a8-4a82-95e2-4940490b1678-combined-ca-bundle\") pod \"6c6eae2b-18a8-4a82-95e2-4940490b1678\" (UID: \"6c6eae2b-18a8-4a82-95e2-4940490b1678\") " Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.121000 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv8l8\" (UniqueName: \"kubernetes.io/projected/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-kube-api-access-dv8l8\") pod \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\" (UID: \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\") " Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.121042 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-config-data\") pod \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\" (UID: \"0e9e7d37-60ae-4489-a69a-e4168eb87cf2\") " Jan 29 11:50:07 crc kubenswrapper[4766]: E0129 11:50:07.121606 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 29 11:50:07 crc kubenswrapper[4766]: E0129 11:50:07.121657 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ace2f6ec-cf57-4742-82e9-e13fd230bb69-config-data podName:ace2f6ec-cf57-4742-82e9-e13fd230bb69 nodeName:}" failed. No retries permitted until 2026-01-29 11:50:15.121639988 +0000 UTC m=+1752.234032999 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/ace2f6ec-cf57-4742-82e9-e13fd230bb69-config-data") pod "rabbitmq-cell1-server-0" (UID: "ace2f6ec-cf57-4742-82e9-e13fd230bb69") : configmap "rabbitmq-cell1-config-data" not found Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.123453 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0e9e7d37-60ae-4489-a69a-e4168eb87cf2" (UID: "0e9e7d37-60ae-4489-a69a-e4168eb87cf2"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.132258 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-kube-api-access-dv8l8" (OuterVolumeSpecName: "kube-api-access-dv8l8") pod "0e9e7d37-60ae-4489-a69a-e4168eb87cf2" (UID: "0e9e7d37-60ae-4489-a69a-e4168eb87cf2"). InnerVolumeSpecName "kube-api-access-dv8l8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.133939 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0e9e7d37-60ae-4489-a69a-e4168eb87cf2" (UID: "0e9e7d37-60ae-4489-a69a-e4168eb87cf2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.135737 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.140708 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-scripts" (OuterVolumeSpecName: "scripts") pod "0e9e7d37-60ae-4489-a69a-e4168eb87cf2" (UID: "0e9e7d37-60ae-4489-a69a-e4168eb87cf2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.144739 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.145703 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c6eae2b-18a8-4a82-95e2-4940490b1678-kube-api-access-zbn4j" (OuterVolumeSpecName: "kube-api-access-zbn4j") pod "6c6eae2b-18a8-4a82-95e2-4940490b1678" (UID: "6c6eae2b-18a8-4a82-95e2-4940490b1678"). InnerVolumeSpecName "kube-api-access-zbn4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.151802 4766 scope.go:117] "RemoveContainer" containerID="7757bdf84a1a20ce16552c3e15762e105f6f1602c859ce9e79be4ff4bbd3a36d" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.152303 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.160335 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.168840 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c6eae2b-18a8-4a82-95e2-4940490b1678-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c6eae2b-18a8-4a82-95e2-4940490b1678" (UID: "6c6eae2b-18a8-4a82-95e2-4940490b1678"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.170272 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jgkf2"] Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.186731 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e9e7d37-60ae-4489-a69a-e4168eb87cf2" (UID: "0e9e7d37-60ae-4489-a69a-e4168eb87cf2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.189213 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c6eae2b-18a8-4a82-95e2-4940490b1678-config-data" (OuterVolumeSpecName: "config-data") pod "6c6eae2b-18a8-4a82-95e2-4940490b1678" (UID: "6c6eae2b-18a8-4a82-95e2-4940490b1678"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.192316 4766 scope.go:117] "RemoveContainer" containerID="5fa3e2236ec63b27db194527bb716839b21f9cea6f579d3762f4f41dced8ddd1" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.195220 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-d3fe-account-create-update-bpt5v"] Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.202131 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-d3fe-account-create-update-bpt5v"] Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.207971 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-864fcd46f6-bn7r2"] Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.214046 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-864fcd46f6-bn7r2"] Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.222679 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.223004 4766 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.223018 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.223031 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbn4j\" (UniqueName: \"kubernetes.io/projected/6c6eae2b-18a8-4a82-95e2-4940490b1678-kube-api-access-zbn4j\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.223043 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.223055 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c6eae2b-18a8-4a82-95e2-4940490b1678-config-data\") on node \"crc\" 
DevicePath \"\"" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.223065 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6eae2b-18a8-4a82-95e2-4940490b1678-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.223075 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dv8l8\" (UniqueName: \"kubernetes.io/projected/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-kube-api-access-dv8l8\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.238989 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1996793f-f3ca-4559-97d6-867f0d0a2b61" path="/var/lib/kubelet/pods/1996793f-f3ca-4559-97d6-867f0d0a2b61/volumes" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.240082 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22118bb6-3dd9-41d5-8215-d8e4679828ba" path="/var/lib/kubelet/pods/22118bb6-3dd9-41d5-8215-d8e4679828ba/volumes" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.241644 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34a8c513-ef7f-49ce-a0d8-2d9351abca2a" path="/var/lib/kubelet/pods/34a8c513-ef7f-49ce-a0d8-2d9351abca2a/volumes" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.242589 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ed02fac-f569-47e7-a243-6d0e37dc6c05" path="/var/lib/kubelet/pods/3ed02fac-f569-47e7-a243-6d0e37dc6c05/volumes" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.243468 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6739e909-eb6b-4578-8436-fa9f24385e0a" path="/var/lib/kubelet/pods/6739e909-eb6b-4578-8436-fa9f24385e0a/volumes" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.244803 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d99f4d4-0dab-45de-ac76-7a0fa820c353" path="/var/lib/kubelet/pods/7d99f4d4-0dab-45de-ac76-7a0fa820c353/volumes" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.245607 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8162079c-abe4-4e9c-bdd5-2fbb43187e61" path="/var/lib/kubelet/pods/8162079c-abe4-4e9c-bdd5-2fbb43187e61/volumes" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.245965 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-config-data" (OuterVolumeSpecName: "config-data") pod "0e9e7d37-60ae-4489-a69a-e4168eb87cf2" (UID: "0e9e7d37-60ae-4489-a69a-e4168eb87cf2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.246277 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a084e5b1-d167-4678-8ab9-af72fb1d07fd" path="/var/lib/kubelet/pods/a084e5b1-d167-4678-8ab9-af72fb1d07fd/volumes" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.247147 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b198eac9-030c-43fc-ae7d-a59e6bf299a4" path="/var/lib/kubelet/pods/b198eac9-030c-43fc-ae7d-a59e6bf299a4/volumes" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.247646 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0c26286-7e5f-4610-967b-408ad3916918" path="/var/lib/kubelet/pods/c0c26286-7e5f-4610-967b-408ad3916918/volumes" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.248642 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d00d673d-aea5-4014-8e2b-bcb78afb7606" path="/var/lib/kubelet/pods/d00d673d-aea5-4014-8e2b-bcb78afb7606/volumes" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.249879 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd1ffb49-b314-4d31-94d6-de70e35d917e" path="/var/lib/kubelet/pods/dd1ffb49-b314-4d31-94d6-de70e35d917e/volumes" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.250501 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee945927-3683-4163-ac37-83d894a9569b" path="/var/lib/kubelet/pods/ee945927-3683-4163-ac37-83d894a9569b/volumes" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.250886 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa516723-105a-4ea0-98d7-317538e3d438" path="/var/lib/kubelet/pods/fa516723-105a-4ea0-98d7-317538e3d438/volumes" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.324621 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b198eac9-030c-43fc-ae7d-a59e6bf299a4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.324651 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tdtz\" (UniqueName: \"kubernetes.io/projected/b198eac9-030c-43fc-ae7d-a59e6bf299a4-kube-api-access-4tdtz\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.324663 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e9e7d37-60ae-4489-a69a-e4168eb87cf2-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.804177 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.804188 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0e9e7d37-60ae-4489-a69a-e4168eb87cf2","Type":"ContainerDied","Data":"6ee9c30a2b64eec16ccf6d3b12b79e122359a7365e95e868d280a6f09522ec08"} Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.804270 4766 scope.go:117] "RemoveContainer" containerID="0a988c9f46e3a70b4049e9abe888a41821aad0a9143a7ab9d80be40f836fe69e" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.805716 4766 generic.go:334] "Generic (PLEG): container finished" podID="b77b577e-b980-46fb-945a-a0b57e3bdc17" containerID="81f89abef5c9ff0ed76588cc8797d021673aa15a99156bcbfe83b47af9618c73" exitCode=0 Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.805741 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b77b577e-b980-46fb-945a-a0b57e3bdc17","Type":"ContainerDied","Data":"81f89abef5c9ff0ed76588cc8797d021673aa15a99156bcbfe83b47af9618c73"} Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.817482 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9453e394-ed9c-4d36-b200-e559e620a7f7/ovn-northd/0.log" Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.817538 4766 generic.go:334] "Generic (PLEG): container finished" podID="9453e394-ed9c-4d36-b200-e559e620a7f7" containerID="587fc7f2d8e47e6824d572c88379e7c339ef834f4f2a33713d946a1ea350ea67" exitCode=139 Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.817623 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9453e394-ed9c-4d36-b200-e559e620a7f7","Type":"ContainerDied","Data":"587fc7f2d8e47e6824d572c88379e7c339ef834f4f2a33713d946a1ea350ea67"} Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.823129 4766 generic.go:334] "Generic (PLEG): container finished" podID="0607cc62-49d5-4a25-b4ad-636cae5d1e7e" containerID="9a99c0592d77644bf5b6f77afc5cf7aaa5c3a2e758cf41c91b1d8d6f29b64745" exitCode=0 Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.823182 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6757d49457-dctc6" event={"ID":"0607cc62-49d5-4a25-b4ad-636cae5d1e7e","Type":"ContainerDied","Data":"9a99c0592d77644bf5b6f77afc5cf7aaa5c3a2e758cf41c91b1d8d6f29b64745"} Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.841894 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6c6eae2b-18a8-4a82-95e2-4940490b1678","Type":"ContainerDied","Data":"1a08eda308e5b86589b10f48609b53f6b94314846217f23b59d88ddacddf3fc1"} Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.842028 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.846503 4766 scope.go:117] "RemoveContainer" containerID="14e4b623cc33e1869a58abf1c35db16e3909d3d2a092250a9f93c7d83fa741ec"
Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.852845 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.853822 4766 generic.go:334] "Generic (PLEG): container finished" podID="42003677-86a8-45ca-ab5e-5f8a029a5cf0" containerID="ccfb78799bb8ef876f0e5a4a2e6298e5bdff609a4f3eff0581bd7456af8c204f" exitCode=0
Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.853858 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgkf2" event={"ID":"42003677-86a8-45ca-ab5e-5f8a029a5cf0","Type":"ContainerDied","Data":"ccfb78799bb8ef876f0e5a4a2e6298e5bdff609a4f3eff0581bd7456af8c204f"}
Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.853882 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgkf2" event={"ID":"42003677-86a8-45ca-ab5e-5f8a029a5cf0","Type":"ContainerStarted","Data":"8074e261798ad06400579298c25d46c50219abbf731dc618fdaa85993c8c9cc3"}
Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.860890 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.868120 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.879471 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 29 11:50:07 crc kubenswrapper[4766]: I0129 11:50:07.884301 4766 scope.go:117] "RemoveContainer" containerID="004bd341daa79bad3d15e54cc1bb127c54401c3d66802d245e39b218f040695f"
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.472846 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9453e394-ed9c-4d36-b200-e559e620a7f7/ovn-northd/0.log"
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.473111 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.480235 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.483513 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.665030 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlbqg\" (UniqueName: \"kubernetes.io/projected/b77b577e-b980-46fb-945a-a0b57e3bdc17-kube-api-access-hlbqg\") pod \"b77b577e-b980-46fb-945a-a0b57e3bdc17\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") "
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.665081 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b77b577e-b980-46fb-945a-a0b57e3bdc17-rabbitmq-confd\") pod \"b77b577e-b980-46fb-945a-a0b57e3bdc17\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") "
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.665104 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b77b577e-b980-46fb-945a-a0b57e3bdc17-pod-info\") pod \"b77b577e-b980-46fb-945a-a0b57e3bdc17\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") "
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.665147 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9453e394-ed9c-4d36-b200-e559e620a7f7-config\") pod \"9453e394-ed9c-4d36-b200-e559e620a7f7\" (UID: \"9453e394-ed9c-4d36-b200-e559e620a7f7\") "
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.665167 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b77b577e-b980-46fb-945a-a0b57e3bdc17-plugins-conf\") pod \"b77b577e-b980-46fb-945a-a0b57e3bdc17\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") "
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.665215 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-config-data\") pod \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") "
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.665257 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhtcg\" (UniqueName: \"kubernetes.io/projected/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-kube-api-access-mhtcg\") pod \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") "
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.665272 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-internal-tls-certs\") pod \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") "
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.665292 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b77b577e-b980-46fb-945a-a0b57e3bdc17-rabbitmq-erlang-cookie\") pod \"b77b577e-b980-46fb-945a-a0b57e3bdc17\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") "
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.665307 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvrrd\" (UniqueName: \"kubernetes.io/projected/9453e394-ed9c-4d36-b200-e559e620a7f7-kube-api-access-vvrrd\") pod \"9453e394-ed9c-4d36-b200-e559e620a7f7\" (UID: \"9453e394-ed9c-4d36-b200-e559e620a7f7\") "
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.665348 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b77b577e-b980-46fb-945a-a0b57e3bdc17-server-conf\") pod \"b77b577e-b980-46fb-945a-a0b57e3bdc17\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") "
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.665377 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b77b577e-b980-46fb-945a-a0b57e3bdc17-config-data\") pod \"b77b577e-b980-46fb-945a-a0b57e3bdc17\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") "
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.665395 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"b77b577e-b980-46fb-945a-a0b57e3bdc17\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") "
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.665448 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9453e394-ed9c-4d36-b200-e559e620a7f7-combined-ca-bundle\") pod \"9453e394-ed9c-4d36-b200-e559e620a7f7\" (UID: \"9453e394-ed9c-4d36-b200-e559e620a7f7\") "
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.665517 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9453e394-ed9c-4d36-b200-e559e620a7f7-ovn-northd-tls-certs\") pod \"9453e394-ed9c-4d36-b200-e559e620a7f7\" (UID: \"9453e394-ed9c-4d36-b200-e559e620a7f7\") "
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.665535 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-credential-keys\") pod \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") "
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.665550 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9453e394-ed9c-4d36-b200-e559e620a7f7-ovn-rundir\") pod \"9453e394-ed9c-4d36-b200-e559e620a7f7\" (UID: \"9453e394-ed9c-4d36-b200-e559e620a7f7\") "
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.665573 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-fernet-keys\") pod \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") "
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.665588 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-combined-ca-bundle\") pod \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") "
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.665616 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b77b577e-b980-46fb-945a-a0b57e3bdc17-erlang-cookie-secret\") pod \"b77b577e-b980-46fb-945a-a0b57e3bdc17\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") "
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.665637 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b77b577e-b980-46fb-945a-a0b57e3bdc17-rabbitmq-tls\") pod \"b77b577e-b980-46fb-945a-a0b57e3bdc17\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") "
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.665654 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9453e394-ed9c-4d36-b200-e559e620a7f7-metrics-certs-tls-certs\") pod \"9453e394-ed9c-4d36-b200-e559e620a7f7\" (UID: \"9453e394-ed9c-4d36-b200-e559e620a7f7\") "
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.665667 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-public-tls-certs\") pod \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") "
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.665688 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-scripts\") pod \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\" (UID: \"0607cc62-49d5-4a25-b4ad-636cae5d1e7e\") "
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.665725 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b77b577e-b980-46fb-945a-a0b57e3bdc17-rabbitmq-plugins\") pod \"b77b577e-b980-46fb-945a-a0b57e3bdc17\" (UID: \"b77b577e-b980-46fb-945a-a0b57e3bdc17\") "
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.665741 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9453e394-ed9c-4d36-b200-e559e620a7f7-scripts\") pod \"9453e394-ed9c-4d36-b200-e559e620a7f7\" (UID: \"9453e394-ed9c-4d36-b200-e559e620a7f7\") "
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.666691 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9453e394-ed9c-4d36-b200-e559e620a7f7-scripts" (OuterVolumeSpecName: "scripts") pod "9453e394-ed9c-4d36-b200-e559e620a7f7" (UID: "9453e394-ed9c-4d36-b200-e559e620a7f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.671829 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9453e394-ed9c-4d36-b200-e559e620a7f7-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "9453e394-ed9c-4d36-b200-e559e620a7f7" (UID: "9453e394-ed9c-4d36-b200-e559e620a7f7"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.672336 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b77b577e-b980-46fb-945a-a0b57e3bdc17-kube-api-access-hlbqg" (OuterVolumeSpecName: "kube-api-access-hlbqg") pod "b77b577e-b980-46fb-945a-a0b57e3bdc17" (UID: "b77b577e-b980-46fb-945a-a0b57e3bdc17"). InnerVolumeSpecName "kube-api-access-hlbqg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.674609 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b77b577e-b980-46fb-945a-a0b57e3bdc17-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b77b577e-b980-46fb-945a-a0b57e3bdc17" (UID: "b77b577e-b980-46fb-945a-a0b57e3bdc17"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.675609 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9453e394-ed9c-4d36-b200-e559e620a7f7-config" (OuterVolumeSpecName: "config") pod "9453e394-ed9c-4d36-b200-e559e620a7f7" (UID: "9453e394-ed9c-4d36-b200-e559e620a7f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.676102 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b77b577e-b980-46fb-945a-a0b57e3bdc17-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b77b577e-b980-46fb-945a-a0b57e3bdc17" (UID: "b77b577e-b980-46fb-945a-a0b57e3bdc17"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.676120 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b77b577e-b980-46fb-945a-a0b57e3bdc17-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b77b577e-b980-46fb-945a-a0b57e3bdc17" (UID: "b77b577e-b980-46fb-945a-a0b57e3bdc17"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.677907 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b77b577e-b980-46fb-945a-a0b57e3bdc17-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b77b577e-b980-46fb-945a-a0b57e3bdc17" (UID: "b77b577e-b980-46fb-945a-a0b57e3bdc17"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.679538 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b77b577e-b980-46fb-945a-a0b57e3bdc17-pod-info" (OuterVolumeSpecName: "pod-info") pod "b77b577e-b980-46fb-945a-a0b57e3bdc17" (UID: "b77b577e-b980-46fb-945a-a0b57e3bdc17"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.679652 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0607cc62-49d5-4a25-b4ad-636cae5d1e7e" (UID: "0607cc62-49d5-4a25-b4ad-636cae5d1e7e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.693578 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0607cc62-49d5-4a25-b4ad-636cae5d1e7e" (UID: "0607cc62-49d5-4a25-b4ad-636cae5d1e7e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.693658 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9453e394-ed9c-4d36-b200-e559e620a7f7-kube-api-access-vvrrd" (OuterVolumeSpecName: "kube-api-access-vvrrd") pod "9453e394-ed9c-4d36-b200-e559e620a7f7" (UID: "9453e394-ed9c-4d36-b200-e559e620a7f7"). InnerVolumeSpecName "kube-api-access-vvrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.694980 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-kube-api-access-mhtcg" (OuterVolumeSpecName: "kube-api-access-mhtcg") pod "0607cc62-49d5-4a25-b4ad-636cae5d1e7e" (UID: "0607cc62-49d5-4a25-b4ad-636cae5d1e7e"). InnerVolumeSpecName "kube-api-access-mhtcg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.696067 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-scripts" (OuterVolumeSpecName: "scripts") pod "0607cc62-49d5-4a25-b4ad-636cae5d1e7e" (UID: "0607cc62-49d5-4a25-b4ad-636cae5d1e7e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.696619 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b77b577e-b980-46fb-945a-a0b57e3bdc17-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b77b577e-b980-46fb-945a-a0b57e3bdc17" (UID: "b77b577e-b980-46fb-945a-a0b57e3bdc17"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.697865 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "b77b577e-b980-46fb-945a-a0b57e3bdc17" (UID: "b77b577e-b980-46fb-945a-a0b57e3bdc17"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.733531 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-config-data" (OuterVolumeSpecName: "config-data") pod "0607cc62-49d5-4a25-b4ad-636cae5d1e7e" (UID: "0607cc62-49d5-4a25-b4ad-636cae5d1e7e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.733761 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0607cc62-49d5-4a25-b4ad-636cae5d1e7e" (UID: "0607cc62-49d5-4a25-b4ad-636cae5d1e7e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.737573 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9453e394-ed9c-4d36-b200-e559e620a7f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9453e394-ed9c-4d36-b200-e559e620a7f7" (UID: "9453e394-ed9c-4d36-b200-e559e620a7f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.740697 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b77b577e-b980-46fb-945a-a0b57e3bdc17-config-data" (OuterVolumeSpecName: "config-data") pod "b77b577e-b980-46fb-945a-a0b57e3bdc17" (UID: "b77b577e-b980-46fb-945a-a0b57e3bdc17"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.742491 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b77b577e-b980-46fb-945a-a0b57e3bdc17-server-conf" (OuterVolumeSpecName: "server-conf") pod "b77b577e-b980-46fb-945a-a0b57e3bdc17" (UID: "b77b577e-b980-46fb-945a-a0b57e3bdc17"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.755823 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0607cc62-49d5-4a25-b4ad-636cae5d1e7e" (UID: "0607cc62-49d5-4a25-b4ad-636cae5d1e7e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.767772 4766 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.767808 4766 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9453e394-ed9c-4d36-b200-e559e620a7f7-ovn-rundir\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.767817 4766 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.767826 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.767835 4766 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b77b577e-b980-46fb-945a-a0b57e3bdc17-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.767842 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b77b577e-b980-46fb-945a-a0b57e3bdc17-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.767850 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.767860 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b77b577e-b980-46fb-945a-a0b57e3bdc17-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.767868 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9453e394-ed9c-4d36-b200-e559e620a7f7-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.767876 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlbqg\" (UniqueName: \"kubernetes.io/projected/b77b577e-b980-46fb-945a-a0b57e3bdc17-kube-api-access-hlbqg\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.767885 4766 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b77b577e-b980-46fb-945a-a0b57e3bdc17-pod-info\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.767893 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9453e394-ed9c-4d36-b200-e559e620a7f7-config\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.767901 4766 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b77b577e-b980-46fb-945a-a0b57e3bdc17-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.767908 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.767916 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhtcg\" (UniqueName: \"kubernetes.io/projected/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-kube-api-access-mhtcg\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.767924 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.767933 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b77b577e-b980-46fb-945a-a0b57e3bdc17-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.767941 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvrrd\" (UniqueName: \"kubernetes.io/projected/9453e394-ed9c-4d36-b200-e559e620a7f7-kube-api-access-vvrrd\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.767949 4766 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b77b577e-b980-46fb-945a-a0b57e3bdc17-server-conf\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.767957 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b77b577e-b980-46fb-945a-a0b57e3bdc17-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.767983 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" "
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.767993 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9453e394-ed9c-4d36-b200-e559e620a7f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.785100 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0607cc62-49d5-4a25-b4ad-636cae5d1e7e" (UID: "0607cc62-49d5-4a25-b4ad-636cae5d1e7e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.786648 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc"
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.804919 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9453e394-ed9c-4d36-b200-e559e620a7f7-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "9453e394-ed9c-4d36-b200-e559e620a7f7" (UID: "9453e394-ed9c-4d36-b200-e559e620a7f7"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.809968 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9453e394-ed9c-4d36-b200-e559e620a7f7-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "9453e394-ed9c-4d36-b200-e559e620a7f7" (UID: "9453e394-ed9c-4d36-b200-e559e620a7f7"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.833708 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b77b577e-b980-46fb-945a-a0b57e3bdc17-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b77b577e-b980-46fb-945a-a0b57e3bdc17" (UID: "b77b577e-b980-46fb-945a-a0b57e3bdc17"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.869118 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9453e394-ed9c-4d36-b200-e559e620a7f7-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.869165 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0607cc62-49d5-4a25-b4ad-636cae5d1e7e-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.869178 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b77b577e-b980-46fb-945a-a0b57e3bdc17-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.869188 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.869201 4766 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9453e394-ed9c-4d36-b200-e559e620a7f7-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.938634 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6757d49457-dctc6" event={"ID":"0607cc62-49d5-4a25-b4ad-636cae5d1e7e","Type":"ContainerDied","Data":"c6261f71b81e62cd6a6850c7a8333dc90251566642ff031959560a0904f6f6d2"}
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.938702 4766 scope.go:117] "RemoveContainer" containerID="9a99c0592d77644bf5b6f77afc5cf7aaa5c3a2e758cf41c91b1d8d6f29b64745"
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.938844 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6757d49457-dctc6"
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.950977 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9453e394-ed9c-4d36-b200-e559e620a7f7/ovn-northd/0.log"
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.951147 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.951207 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9453e394-ed9c-4d36-b200-e559e620a7f7","Type":"ContainerDied","Data":"9bf188dd9af5f4187ffaf61b998ad1081a1310d3088ae50894cf97eac20e1e2c"}
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.955532 4766 generic.go:334] "Generic (PLEG): container finished" podID="e238ce2e-9a21-43c5-94c2-0a31ab078c79" containerID="a5c49449e84d148200b6f0a47a8ec23b2f77e9135152810c5d0bbabc622713e8" exitCode=0
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.955598 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-67f655d9dc-95fxw" event={"ID":"e238ce2e-9a21-43c5-94c2-0a31ab078c79","Type":"ContainerDied","Data":"a5c49449e84d148200b6f0a47a8ec23b2f77e9135152810c5d0bbabc622713e8"}
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.955628 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-67f655d9dc-95fxw" event={"ID":"e238ce2e-9a21-43c5-94c2-0a31ab078c79","Type":"ContainerDied","Data":"af5ccad1b02071aef2bdf3859a959e621847797164538b3b6ab1737c1f33fbbd"}
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.955640 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af5ccad1b02071aef2bdf3859a959e621847797164538b3b6ab1737c1f33fbbd"
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.969719 4766 generic.go:334] "Generic (PLEG): container finished" podID="15805cd2-3301-4e59-8c66-adde53408809" containerID="a1a9a79ccf506d864099d855f636208585fac69df49e6476e65b408773389289" exitCode=0
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.969798 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2" event={"ID":"15805cd2-3301-4e59-8c66-adde53408809","Type":"ContainerDied","Data":"a1a9a79ccf506d864099d855f636208585fac69df49e6476e65b408773389289"}
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.978442 4766 generic.go:334] "Generic (PLEG): container finished" podID="6bb21068-54ea-4e08-b03e-5186a35d7a09" containerID="52b9aceb7fcf91e3fea6020d24cd2f5e816f8e95a93472d9f4a950055b986415" exitCode=0
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.978539 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bb21068-54ea-4e08-b03e-5186a35d7a09","Type":"ContainerDied","Data":"52b9aceb7fcf91e3fea6020d24cd2f5e816f8e95a93472d9f4a950055b986415"}
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.989635 4766 generic.go:334] "Generic (PLEG): container finished" podID="ace2f6ec-cf57-4742-82e9-e13fd230bb69" containerID="07c7e43f4c233bc15a95251cad07a884a33a05f78743bb5a3c6f01f63b880784" exitCode=0
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.989713 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ace2f6ec-cf57-4742-82e9-e13fd230bb69","Type":"ContainerDied","Data":"07c7e43f4c233bc15a95251cad07a884a33a05f78743bb5a3c6f01f63b880784"}
Jan 29 11:50:08 crc kubenswrapper[4766]: I0129 11:50:08.990366 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-67f655d9dc-95fxw"
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.001490 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b77b577e-b980-46fb-945a-a0b57e3bdc17","Type":"ContainerDied","Data":"15ef1f5966922a37eda2628875b1ee98ab9d0b61f0383424299889a86ad47c85"}
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.008573 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.022474 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6757d49457-dctc6"]
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.029532 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6757d49457-dctc6"]
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.072617 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"]
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.073267 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjhcp\" (UniqueName: \"kubernetes.io/projected/e238ce2e-9a21-43c5-94c2-0a31ab078c79-kube-api-access-xjhcp\") pod \"e238ce2e-9a21-43c5-94c2-0a31ab078c79\" (UID: \"e238ce2e-9a21-43c5-94c2-0a31ab078c79\") "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.073350 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e238ce2e-9a21-43c5-94c2-0a31ab078c79-config-data\") pod \"e238ce2e-9a21-43c5-94c2-0a31ab078c79\" (UID: \"e238ce2e-9a21-43c5-94c2-0a31ab078c79\") "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.073401 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e238ce2e-9a21-43c5-94c2-0a31ab078c79-config-data-custom\") pod \"e238ce2e-9a21-43c5-94c2-0a31ab078c79\" (UID: \"e238ce2e-9a21-43c5-94c2-0a31ab078c79\") "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.073551 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e238ce2e-9a21-43c5-94c2-0a31ab078c79-combined-ca-bundle\") pod \"e238ce2e-9a21-43c5-94c2-0a31ab078c79\" (UID: \"e238ce2e-9a21-43c5-94c2-0a31ab078c79\") "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.073588 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e238ce2e-9a21-43c5-94c2-0a31ab078c79-logs\") pod \"e238ce2e-9a21-43c5-94c2-0a31ab078c79\" (UID: \"e238ce2e-9a21-43c5-94c2-0a31ab078c79\") "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.075486 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e238ce2e-9a21-43c5-94c2-0a31ab078c79-logs" (OuterVolumeSpecName: "logs") pod "e238ce2e-9a21-43c5-94c2-0a31ab078c79" (UID: "e238ce2e-9a21-43c5-94c2-0a31ab078c79"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.078715 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e238ce2e-9a21-43c5-94c2-0a31ab078c79-kube-api-access-xjhcp" (OuterVolumeSpecName: "kube-api-access-xjhcp") pod "e238ce2e-9a21-43c5-94c2-0a31ab078c79" (UID: "e238ce2e-9a21-43c5-94c2-0a31ab078c79"). InnerVolumeSpecName "kube-api-access-xjhcp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.086780 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"]
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.099837 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e238ce2e-9a21-43c5-94c2-0a31ab078c79-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e238ce2e-9a21-43c5-94c2-0a31ab078c79" (UID: "e238ce2e-9a21-43c5-94c2-0a31ab078c79"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.127509 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.143714 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.148972 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e238ce2e-9a21-43c5-94c2-0a31ab078c79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e238ce2e-9a21-43c5-94c2-0a31ab078c79" (UID: "e238ce2e-9a21-43c5-94c2-0a31ab078c79"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.163150 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e238ce2e-9a21-43c5-94c2-0a31ab078c79-config-data" (OuterVolumeSpecName: "config-data") pod "e238ce2e-9a21-43c5-94c2-0a31ab078c79" (UID: "e238ce2e-9a21-43c5-94c2-0a31ab078c79"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.176012 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e238ce2e-9a21-43c5-94c2-0a31ab078c79-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.176040 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e238ce2e-9a21-43c5-94c2-0a31ab078c79-logs\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.176050 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjhcp\" (UniqueName: \"kubernetes.io/projected/e238ce2e-9a21-43c5-94c2-0a31ab078c79-kube-api-access-xjhcp\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.176058 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e238ce2e-9a21-43c5-94c2-0a31ab078c79-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.176068 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e238ce2e-9a21-43c5-94c2-0a31ab078c79-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.236927 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0607cc62-49d5-4a25-b4ad-636cae5d1e7e" path="/var/lib/kubelet/pods/0607cc62-49d5-4a25-b4ad-636cae5d1e7e/volumes"
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.237511 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e9e7d37-60ae-4489-a69a-e4168eb87cf2" path="/var/lib/kubelet/pods/0e9e7d37-60ae-4489-a69a-e4168eb87cf2/volumes"
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.238595 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c6eae2b-18a8-4a82-95e2-4940490b1678" path="/var/lib/kubelet/pods/6c6eae2b-18a8-4a82-95e2-4940490b1678/volumes"
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.239783 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9453e394-ed9c-4d36-b200-e559e620a7f7" path="/var/lib/kubelet/pods/9453e394-ed9c-4d36-b200-e559e620a7f7/volumes"
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.240623 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b77b577e-b980-46fb-945a-a0b57e3bdc17" path="/var/lib/kubelet/pods/b77b577e-b980-46fb-945a-a0b57e3bdc17/volumes"
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.278625 4766 scope.go:117] "RemoveContainer" containerID="7859ad51abd1137169edd5bae5e4945e15a36ed89a66747e6ac27ac9476ded8b"
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.389008 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.397184 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="c0c26286-7e5f-4610-967b-408ad3916918" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.159:8776/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.485117 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6btz\" (UniqueName: \"kubernetes.io/projected/ace2f6ec-cf57-4742-82e9-e13fd230bb69-kube-api-access-w6btz\") pod \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.485166 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ace2f6ec-cf57-4742-82e9-e13fd230bb69-rabbitmq-tls\") pod \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.485235 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ace2f6ec-cf57-4742-82e9-e13fd230bb69-rabbitmq-confd\") pod \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.485252 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ace2f6ec-cf57-4742-82e9-e13fd230bb69-server-conf\") pod \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.485277 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ace2f6ec-cf57-4742-82e9-e13fd230bb69-pod-info\") pod \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.485300 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.485330 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ace2f6ec-cf57-4742-82e9-e13fd230bb69-plugins-conf\") pod \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.485348 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ace2f6ec-cf57-4742-82e9-e13fd230bb69-rabbitmq-plugins\") pod \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.485370 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ace2f6ec-cf57-4742-82e9-e13fd230bb69-rabbitmq-erlang-cookie\") pod \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.485434 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ace2f6ec-cf57-4742-82e9-e13fd230bb69-erlang-cookie-secret\") pod \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.485465 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ace2f6ec-cf57-4742-82e9-e13fd230bb69-config-data\") pod \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\" (UID: \"ace2f6ec-cf57-4742-82e9-e13fd230bb69\") "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.486627 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ace2f6ec-cf57-4742-82e9-e13fd230bb69-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ace2f6ec-cf57-4742-82e9-e13fd230bb69" (UID: "ace2f6ec-cf57-4742-82e9-e13fd230bb69"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.491763 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ace2f6ec-cf57-4742-82e9-e13fd230bb69-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ace2f6ec-cf57-4742-82e9-e13fd230bb69" (UID: "ace2f6ec-cf57-4742-82e9-e13fd230bb69"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.498764 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ace2f6ec-cf57-4742-82e9-e13fd230bb69-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ace2f6ec-cf57-4742-82e9-e13fd230bb69" (UID: "ace2f6ec-cf57-4742-82e9-e13fd230bb69"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.509797 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ace2f6ec-cf57-4742-82e9-e13fd230bb69-kube-api-access-w6btz" (OuterVolumeSpecName: "kube-api-access-w6btz") pod "ace2f6ec-cf57-4742-82e9-e13fd230bb69" (UID: "ace2f6ec-cf57-4742-82e9-e13fd230bb69"). InnerVolumeSpecName "kube-api-access-w6btz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.509797 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ace2f6ec-cf57-4742-82e9-e13fd230bb69-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ace2f6ec-cf57-4742-82e9-e13fd230bb69" (UID: "ace2f6ec-cf57-4742-82e9-e13fd230bb69"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.509836 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ace2f6ec-cf57-4742-82e9-e13fd230bb69-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ace2f6ec-cf57-4742-82e9-e13fd230bb69" (UID: "ace2f6ec-cf57-4742-82e9-e13fd230bb69"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.510147 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ace2f6ec-cf57-4742-82e9-e13fd230bb69-pod-info" (OuterVolumeSpecName: "pod-info") pod "ace2f6ec-cf57-4742-82e9-e13fd230bb69" (UID: "ace2f6ec-cf57-4742-82e9-e13fd230bb69"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.511535 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "ace2f6ec-cf57-4742-82e9-e13fd230bb69" (UID: "ace2f6ec-cf57-4742-82e9-e13fd230bb69"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.527656 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ace2f6ec-cf57-4742-82e9-e13fd230bb69-config-data" (OuterVolumeSpecName: "config-data") pod "ace2f6ec-cf57-4742-82e9-e13fd230bb69" (UID: "ace2f6ec-cf57-4742-82e9-e13fd230bb69"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.530298 4766 scope.go:117] "RemoveContainer" containerID="587fc7f2d8e47e6824d572c88379e7c339ef834f4f2a33713d946a1ea350ea67"
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.581173 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ace2f6ec-cf57-4742-82e9-e13fd230bb69-server-conf" (OuterVolumeSpecName: "server-conf") pod "ace2f6ec-cf57-4742-82e9-e13fd230bb69" (UID: "ace2f6ec-cf57-4742-82e9-e13fd230bb69"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.587552 4766 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ace2f6ec-cf57-4742-82e9-e13fd230bb69-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.587584 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ace2f6ec-cf57-4742-82e9-e13fd230bb69-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.587595 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6btz\" (UniqueName: \"kubernetes.io/projected/ace2f6ec-cf57-4742-82e9-e13fd230bb69-kube-api-access-w6btz\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.587604 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ace2f6ec-cf57-4742-82e9-e13fd230bb69-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.587614 4766 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ace2f6ec-cf57-4742-82e9-e13fd230bb69-server-conf\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.587624 4766 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ace2f6ec-cf57-4742-82e9-e13fd230bb69-pod-info\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.587645 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.587654 4766 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ace2f6ec-cf57-4742-82e9-e13fd230bb69-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.587662 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ace2f6ec-cf57-4742-82e9-e13fd230bb69-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.587670 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ace2f6ec-cf57-4742-82e9-e13fd230bb69-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.604786 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.680651 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ace2f6ec-cf57-4742-82e9-e13fd230bb69-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ace2f6ec-cf57-4742-82e9-e13fd230bb69" (UID: "ace2f6ec-cf57-4742-82e9-e13fd230bb69"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.690397 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ace2f6ec-cf57-4742-82e9-e13fd230bb69-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.690463 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.743594 4766 scope.go:117] "RemoveContainer" containerID="81f89abef5c9ff0ed76588cc8797d021673aa15a99156bcbfe83b47af9618c73"
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.804248 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.807643 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2"
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.879628 4766 scope.go:117] "RemoveContainer" containerID="a7bd65c4cb6402ca31a9d412ea5ab09924e3681dbdd63afcca07deade4b71a0b"
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.897845 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8scn\" (UniqueName: \"kubernetes.io/projected/6bb21068-54ea-4e08-b03e-5186a35d7a09-kube-api-access-c8scn\") pod \"6bb21068-54ea-4e08-b03e-5186a35d7a09\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.897901 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-scripts\") pod \"6bb21068-54ea-4e08-b03e-5186a35d7a09\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.897954 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-ceilometer-tls-certs\") pod \"6bb21068-54ea-4e08-b03e-5186a35d7a09\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.897990 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15805cd2-3301-4e59-8c66-adde53408809-config-data\") pod \"15805cd2-3301-4e59-8c66-adde53408809\" (UID: \"15805cd2-3301-4e59-8c66-adde53408809\") "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.898013 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bb21068-54ea-4e08-b03e-5186a35d7a09-log-httpd\") pod \"6bb21068-54ea-4e08-b03e-5186a35d7a09\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.898080 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15805cd2-3301-4e59-8c66-adde53408809-combined-ca-bundle\") pod \"15805cd2-3301-4e59-8c66-adde53408809\" (UID: \"15805cd2-3301-4e59-8c66-adde53408809\") "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.898115 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15805cd2-3301-4e59-8c66-adde53408809-logs\") pod \"15805cd2-3301-4e59-8c66-adde53408809\" (UID: \"15805cd2-3301-4e59-8c66-adde53408809\") "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.898140 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-config-data\") pod \"6bb21068-54ea-4e08-b03e-5186a35d7a09\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.898183 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bb21068-54ea-4e08-b03e-5186a35d7a09-run-httpd\") pod \"6bb21068-54ea-4e08-b03e-5186a35d7a09\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.898221 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9xpr\" (UniqueName: \"kubernetes.io/projected/15805cd2-3301-4e59-8c66-adde53408809-kube-api-access-p9xpr\") pod \"15805cd2-3301-4e59-8c66-adde53408809\" (UID: \"15805cd2-3301-4e59-8c66-adde53408809\") "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.898238 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15805cd2-3301-4e59-8c66-adde53408809-config-data-custom\") pod \"15805cd2-3301-4e59-8c66-adde53408809\" (UID: \"15805cd2-3301-4e59-8c66-adde53408809\") "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.898261 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-sg-core-conf-yaml\") pod \"6bb21068-54ea-4e08-b03e-5186a35d7a09\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.898275 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-combined-ca-bundle\") pod \"6bb21068-54ea-4e08-b03e-5186a35d7a09\" (UID: \"6bb21068-54ea-4e08-b03e-5186a35d7a09\") "
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.901009 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bb21068-54ea-4e08-b03e-5186a35d7a09-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6bb21068-54ea-4e08-b03e-5186a35d7a09" (UID: "6bb21068-54ea-4e08-b03e-5186a35d7a09"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.901117 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bb21068-54ea-4e08-b03e-5186a35d7a09-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6bb21068-54ea-4e08-b03e-5186a35d7a09" (UID: "6bb21068-54ea-4e08-b03e-5186a35d7a09"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.901585 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15805cd2-3301-4e59-8c66-adde53408809-logs" (OuterVolumeSpecName: "logs") pod "15805cd2-3301-4e59-8c66-adde53408809" (UID: "15805cd2-3301-4e59-8c66-adde53408809"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.909829 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-scripts" (OuterVolumeSpecName: "scripts") pod "6bb21068-54ea-4e08-b03e-5186a35d7a09" (UID: "6bb21068-54ea-4e08-b03e-5186a35d7a09"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.929218 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15805cd2-3301-4e59-8c66-adde53408809-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "15805cd2-3301-4e59-8c66-adde53408809" (UID: "15805cd2-3301-4e59-8c66-adde53408809"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.929263 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15805cd2-3301-4e59-8c66-adde53408809-kube-api-access-p9xpr" (OuterVolumeSpecName: "kube-api-access-p9xpr") pod "15805cd2-3301-4e59-8c66-adde53408809" (UID: "15805cd2-3301-4e59-8c66-adde53408809"). InnerVolumeSpecName "kube-api-access-p9xpr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.929302 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bb21068-54ea-4e08-b03e-5186a35d7a09-kube-api-access-c8scn" (OuterVolumeSpecName: "kube-api-access-c8scn") pod "6bb21068-54ea-4e08-b03e-5186a35d7a09" (UID: "6bb21068-54ea-4e08-b03e-5186a35d7a09"). InnerVolumeSpecName "kube-api-access-c8scn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.974445 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6bb21068-54ea-4e08-b03e-5186a35d7a09" (UID: "6bb21068-54ea-4e08-b03e-5186a35d7a09"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.984027 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15805cd2-3301-4e59-8c66-adde53408809-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15805cd2-3301-4e59-8c66-adde53408809" (UID: "15805cd2-3301-4e59-8c66-adde53408809"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.985302 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "6bb21068-54ea-4e08-b03e-5186a35d7a09" (UID: "6bb21068-54ea-4e08-b03e-5186a35d7a09"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.993568 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15805cd2-3301-4e59-8c66-adde53408809-config-data" (OuterVolumeSpecName: "config-data") pod "15805cd2-3301-4e59-8c66-adde53408809" (UID: "15805cd2-3301-4e59-8c66-adde53408809"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.999725 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:09 crc kubenswrapper[4766]: I0129 11:50:09.999935 4766 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.000027 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15805cd2-3301-4e59-8c66-adde53408809-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.000105 4766 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bb21068-54ea-4e08-b03e-5186a35d7a09-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.000188 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15805cd2-3301-4e59-8c66-adde53408809-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.000265 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15805cd2-3301-4e59-8c66-adde53408809-logs\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.000346 4766 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bb21068-54ea-4e08-b03e-5186a35d7a09-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.000433 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9xpr\" (UniqueName: \"kubernetes.io/projected/15805cd2-3301-4e59-8c66-adde53408809-kube-api-access-p9xpr\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.000521 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15805cd2-3301-4e59-8c66-adde53408809-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.000577 4766 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.000638 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8scn\" (UniqueName: \"kubernetes.io/projected/6bb21068-54ea-4e08-b03e-5186a35d7a09-kube-api-access-c8scn\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.040692 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6bb21068-54ea-4e08-b03e-5186a35d7a09" (UID: "6bb21068-54ea-4e08-b03e-5186a35d7a09"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.048183 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2"
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.050975 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2" event={"ID":"15805cd2-3301-4e59-8c66-adde53408809","Type":"ContainerDied","Data":"2d40d22f92c6bf5e4e4bbdc1538e2654c36c7803cc9e2cfa8fd51a1d59aff90a"}
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.051103 4766 scope.go:117] "RemoveContainer" containerID="a1a9a79ccf506d864099d855f636208585fac69df49e6476e65b408773389289"
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.077971 4766 scope.go:117] "RemoveContainer" containerID="d3a5a4ab1f26a3b0ec0c993790441804f0c92c85eb73ffb26bede23ff956c81f"
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.078932 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bb21068-54ea-4e08-b03e-5186a35d7a09","Type":"ContainerDied","Data":"ecc719b1ae0da94bd5cfb0ed8cb0d8200f9aea70a7edba99fe084d6f34924892"}
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.079025 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.087325 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8d49f9cb5-5nhnk"
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.087485 4766 generic.go:334] "Generic (PLEG): container finished" podID="dd5d6aa7-be8d-4439-a4d3-70272705cc2f" containerID="a3261857a975d8ba13b382b3c93311ea52ddb25065b1874aaa59d00eb75e61a5" exitCode=0
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.087531 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8d49f9cb5-5nhnk" event={"ID":"dd5d6aa7-be8d-4439-a4d3-70272705cc2f","Type":"ContainerDied","Data":"a3261857a975d8ba13b382b3c93311ea52ddb25065b1874aaa59d00eb75e61a5"}
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.087555 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8d49f9cb5-5nhnk" event={"ID":"dd5d6aa7-be8d-4439-a4d3-70272705cc2f","Type":"ContainerDied","Data":"4bd51f8fc6cbb5c97ab7a620778c60102dbdb8107a6b5356f9f3716bb900c2a3"}
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.093817 4766 generic.go:334] "Generic (PLEG): container finished" podID="7245aebe-fe32-42fc-a489-c38b15bb4308" containerID="975d9dec64a2fca25f52a750db0c70feb57df9e6479ecb4133299bd8f6a0e06c" exitCode=0
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.093875 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7245aebe-fe32-42fc-a489-c38b15bb4308","Type":"ContainerDied","Data":"975d9dec64a2fca25f52a750db0c70feb57df9e6479ecb4133299bd8f6a0e06c"}
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.093901 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7245aebe-fe32-42fc-a489-c38b15bb4308","Type":"ContainerDied","Data":"56dfdc813b8eb062b0e7e1f06ffe05c412ada4815e1d237ac127cb390912981c"}
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.093913 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56dfdc813b8eb062b0e7e1f06ffe05c412ada4815e1d237ac127cb390912981c"
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.094567 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-config-data" (OuterVolumeSpecName: "config-data") pod "6bb21068-54ea-4e08-b03e-5186a35d7a09" (UID: "6bb21068-54ea-4e08-b03e-5186a35d7a09"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.099157 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.101800 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.101822 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb21068-54ea-4e08-b03e-5186a35d7a09-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.103848 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-67f655d9dc-95fxw"
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.104488 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.108749 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ace2f6ec-cf57-4742-82e9-e13fd230bb69","Type":"ContainerDied","Data":"3740649b0c5c9f9d1f5ab11a33af00c8c252eee44743049d87a9d18daa6871f8"}
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.110624 4766 scope.go:117] "RemoveContainer" containerID="2f0fdc25b25c46bbd38ca0f02d558f7c1d71098932ed72e4a7f35d5b8f371421"
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.112048 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2"]
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.136730 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-65cd6d7bdb-jmsw2"]
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.140814 4766 scope.go:117] "RemoveContainer" containerID="10926e24d0436cca11a58b6675241744a47408f25fc95907f65bdb78e9c1e372"
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.169441 4766 scope.go:117] "RemoveContainer" containerID="52b9aceb7fcf91e3fea6020d24cd2f5e816f8e95a93472d9f4a950055b986415"
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.184445 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.191272 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.199305 4766 scope.go:117] "RemoveContainer" containerID="7d416bce038e327e0ac5e80d025af4b30f549a9927b2ce75a0d83f38a53e6163"
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.202845 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-combined-ca-bundle\") pod \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\" (UID: \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\") "
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.202899 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7245aebe-fe32-42fc-a489-c38b15bb4308-config-data\") pod \"7245aebe-fe32-42fc-a489-c38b15bb4308\" (UID: \"7245aebe-fe32-42fc-a489-c38b15bb4308\") "
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.202947 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-config\") pod \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\" (UID: \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\") "
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.202971 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-public-tls-certs\") pod \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\" (UID: \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\") "
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.203015 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-internal-tls-certs\") pod \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\" (UID: \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\") "
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.203040 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6t2f\" (UniqueName: \"kubernetes.io/projected/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-kube-api-access-c6t2f\") pod \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\" (UID: \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\") "
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.203095 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7245aebe-fe32-42fc-a489-c38b15bb4308-combined-ca-bundle\") pod \"7245aebe-fe32-42fc-a489-c38b15bb4308\" (UID: \"7245aebe-fe32-42fc-a489-c38b15bb4308\") "
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.203119 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq5s7\" (UniqueName: \"kubernetes.io/projected/7245aebe-fe32-42fc-a489-c38b15bb4308-kube-api-access-qq5s7\") pod \"7245aebe-fe32-42fc-a489-c38b15bb4308\" (UID: \"7245aebe-fe32-42fc-a489-c38b15bb4308\") "
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.203148 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-ovndb-tls-certs\") pod \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\" (UID: \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\") "
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.203177 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-httpd-config\") pod \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\" (UID: \"dd5d6aa7-be8d-4439-a4d3-70272705cc2f\") "
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.204084 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-67f655d9dc-95fxw"]
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.210482 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "dd5d6aa7-be8d-4439-a4d3-70272705cc2f" (UID: "dd5d6aa7-be8d-4439-a4d3-70272705cc2f"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.222195 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-kube-api-access-c6t2f" (OuterVolumeSpecName: "kube-api-access-c6t2f") pod "dd5d6aa7-be8d-4439-a4d3-70272705cc2f" (UID: "dd5d6aa7-be8d-4439-a4d3-70272705cc2f"). InnerVolumeSpecName "kube-api-access-c6t2f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.222324 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7245aebe-fe32-42fc-a489-c38b15bb4308-kube-api-access-qq5s7" (OuterVolumeSpecName: "kube-api-access-qq5s7") pod "7245aebe-fe32-42fc-a489-c38b15bb4308" (UID: "7245aebe-fe32-42fc-a489-c38b15bb4308"). InnerVolumeSpecName "kube-api-access-qq5s7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.225143 4766 scope.go:117] "RemoveContainer" containerID="0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d"
Jan 29 11:50:10 crc kubenswrapper[4766]: E0129 11:50:10.225571 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a"
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.226674 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-67f655d9dc-95fxw"]
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.231275 4766 scope.go:117] "RemoveContainer" containerID="d8dbef2524f2542af763a7cb33a1638c422019cf0cf86edf0a6139eede756496"
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.250584 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "dd5d6aa7-be8d-4439-a4d3-70272705cc2f" (UID: "dd5d6aa7-be8d-4439-a4d3-70272705cc2f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.258262 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7245aebe-fe32-42fc-a489-c38b15bb4308-config-data" (OuterVolumeSpecName: "config-data") pod "7245aebe-fe32-42fc-a489-c38b15bb4308" (UID: "7245aebe-fe32-42fc-a489-c38b15bb4308"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.267397 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7245aebe-fe32-42fc-a489-c38b15bb4308-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7245aebe-fe32-42fc-a489-c38b15bb4308" (UID: "7245aebe-fe32-42fc-a489-c38b15bb4308"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.274645 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd5d6aa7-be8d-4439-a4d3-70272705cc2f" (UID: "dd5d6aa7-be8d-4439-a4d3-70272705cc2f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.284626 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dd5d6aa7-be8d-4439-a4d3-70272705cc2f" (UID: "dd5d6aa7-be8d-4439-a4d3-70272705cc2f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.286758 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-config" (OuterVolumeSpecName: "config") pod "dd5d6aa7-be8d-4439-a4d3-70272705cc2f" (UID: "dd5d6aa7-be8d-4439-a4d3-70272705cc2f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.302423 4766 scope.go:117] "RemoveContainer" containerID="a3261857a975d8ba13b382b3c93311ea52ddb25065b1874aaa59d00eb75e61a5"
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.303646 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "dd5d6aa7-be8d-4439-a4d3-70272705cc2f" (UID: "dd5d6aa7-be8d-4439-a4d3-70272705cc2f"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.304489 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7245aebe-fe32-42fc-a489-c38b15bb4308-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.304510 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq5s7\" (UniqueName: \"kubernetes.io/projected/7245aebe-fe32-42fc-a489-c38b15bb4308-kube-api-access-qq5s7\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.304521 4766 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.304529 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.304539 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.304546 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7245aebe-fe32-42fc-a489-c38b15bb4308-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.304554 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-config\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.304562 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.304570 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.304578 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6t2f\" (UniqueName: \"kubernetes.io/projected/dd5d6aa7-be8d-4439-a4d3-70272705cc2f-kube-api-access-c6t2f\") on node \"crc\" DevicePath \"\""
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.324559 4766 scope.go:117] "RemoveContainer" containerID="d8dbef2524f2542af763a7cb33a1638c422019cf0cf86edf0a6139eede756496"
Jan 29 11:50:10 crc kubenswrapper[4766]: E0129 11:50:10.324957 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8dbef2524f2542af763a7cb33a1638c422019cf0cf86edf0a6139eede756496\": container with ID starting with d8dbef2524f2542af763a7cb33a1638c422019cf0cf86edf0a6139eede756496 not found: ID does not exist" containerID="d8dbef2524f2542af763a7cb33a1638c422019cf0cf86edf0a6139eede756496"
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.324989 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8dbef2524f2542af763a7cb33a1638c422019cf0cf86edf0a6139eede756496"} err="failed to get container status \"d8dbef2524f2542af763a7cb33a1638c422019cf0cf86edf0a6139eede756496\": rpc error: code = NotFound desc = could not find container \"d8dbef2524f2542af763a7cb33a1638c422019cf0cf86edf0a6139eede756496\": container with ID starting with d8dbef2524f2542af763a7cb33a1638c422019cf0cf86edf0a6139eede756496 not found: ID does not exist"
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.325012 4766 scope.go:117] "RemoveContainer" containerID="a3261857a975d8ba13b382b3c93311ea52ddb25065b1874aaa59d00eb75e61a5"
Jan 29 11:50:10 crc kubenswrapper[4766]: E0129 11:50:10.325553 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3261857a975d8ba13b382b3c93311ea52ddb25065b1874aaa59d00eb75e61a5\": container with ID starting with a3261857a975d8ba13b382b3c93311ea52ddb25065b1874aaa59d00eb75e61a5 not found: ID does not exist" containerID="a3261857a975d8ba13b382b3c93311ea52ddb25065b1874aaa59d00eb75e61a5"
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.325815 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3261857a975d8ba13b382b3c93311ea52ddb25065b1874aaa59d00eb75e61a5"} err="failed to get container status \"a3261857a975d8ba13b382b3c93311ea52ddb25065b1874aaa59d00eb75e61a5\": rpc error: code = NotFound desc = could not find container \"a3261857a975d8ba13b382b3c93311ea52ddb25065b1874aaa59d00eb75e61a5\": container with ID starting with a3261857a975d8ba13b382b3c93311ea52ddb25065b1874aaa59d00eb75e61a5 not found: ID does not exist"
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.326047 4766 scope.go:117] "RemoveContainer" containerID="07c7e43f4c233bc15a95251cad07a884a33a05f78743bb5a3c6f01f63b880784"
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.345928 4766 scope.go:117] "RemoveContainer" containerID="35d741477652fd2fdab85e5a190f27cf16637cca6d3186932dfe4f9ff8c8c1c1"
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.420281 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 11:50:10 crc kubenswrapper[4766]: I0129 11:50:10.425958 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.118801 4766 generic.go:334] "Generic (PLEG): container finished" podID="42003677-86a8-45ca-ab5e-5f8a029a5cf0" containerID="1d50465c1db9048062e97a5459578ec006d9fb79febced6ab004a67652ca1970" exitCode=0
Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.118890 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgkf2" event={"ID":"42003677-86a8-45ca-ab5e-5f8a029a5cf0","Type":"ContainerDied","Data":"1d50465c1db9048062e97a5459578ec006d9fb79febced6ab004a67652ca1970"}
Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.122728 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8d49f9cb5-5nhnk"
Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.122735 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.160721 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.169707 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.187681 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8d49f9cb5-5nhnk"]
Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.193469 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8d49f9cb5-5nhnk"]
Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.235214 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15805cd2-3301-4e59-8c66-adde53408809" path="/var/lib/kubelet/pods/15805cd2-3301-4e59-8c66-adde53408809/volumes"
Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.237813 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bb21068-54ea-4e08-b03e-5186a35d7a09" path="/var/lib/kubelet/pods/6bb21068-54ea-4e08-b03e-5186a35d7a09/volumes"
Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.238653 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7245aebe-fe32-42fc-a489-c38b15bb4308" path="/var/lib/kubelet/pods/7245aebe-fe32-42fc-a489-c38b15bb4308/volumes"
Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.240097 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ace2f6ec-cf57-4742-82e9-e13fd230bb69" path="/var/lib/kubelet/pods/ace2f6ec-cf57-4742-82e9-e13fd230bb69/volumes"
Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.240849 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd5d6aa7-be8d-4439-a4d3-70272705cc2f" path="/var/lib/kubelet/pods/dd5d6aa7-be8d-4439-a4d3-70272705cc2f/volumes"
Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.242087 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e238ce2e-9a21-43c5-94c2-0a31ab078c79" path="/var/lib/kubelet/pods/e238ce2e-9a21-43c5-94c2-0a31ab078c79/volumes"
Jan 29 11:50:11 crc kubenswrapper[4766]: E0129 11:50:11.264100 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d0c73be724cc09499410e85d8a2850f80580b59a49608c7346ae0c91c515cca" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 29 11:50:11 crc kubenswrapper[4766]: E0129 11:50:11.264186 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2 is running failed: container process not found" containerID="ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 29 11:50:11 crc kubenswrapper[4766]: E0129 11:50:11.265593 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2 is running failed: container process not found" containerID="ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 29 11:50:11 crc kubenswrapper[4766]: E0129 11:50:11.267751 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2 is running failed: container process not found" containerID="ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 11:50:11 crc kubenswrapper[4766]: E0129 11:50:11.267806 4766 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-2gh2n" podUID="be830961-a6c3-4340-a134-ea20de96b31b" containerName="ovsdb-server" Jan 29 11:50:11 crc kubenswrapper[4766]: E0129 11:50:11.268304 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d0c73be724cc09499410e85d8a2850f80580b59a49608c7346ae0c91c515cca" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 11:50:11 crc kubenswrapper[4766]: E0129 11:50:11.270061 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d0c73be724cc09499410e85d8a2850f80580b59a49608c7346ae0c91c515cca" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 11:50:11 crc kubenswrapper[4766]: E0129 11:50:11.270199 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-2gh2n" podUID="be830961-a6c3-4340-a134-ea20de96b31b" containerName="ovs-vswitchd" Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.536318 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.628348 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4f673618-4b7d-47e5-84af-092c995bca8e-config-data-generated\") pod \"4f673618-4b7d-47e5-84af-092c995bca8e\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.628464 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f673618-4b7d-47e5-84af-092c995bca8e-galera-tls-certs\") pod \"4f673618-4b7d-47e5-84af-092c995bca8e\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.628606 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chlbf\" (UniqueName: \"kubernetes.io/projected/4f673618-4b7d-47e5-84af-092c995bca8e-kube-api-access-chlbf\") pod \"4f673618-4b7d-47e5-84af-092c995bca8e\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.628625 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f673618-4b7d-47e5-84af-092c995bca8e-kolla-config\") pod \"4f673618-4b7d-47e5-84af-092c995bca8e\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.628644 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"4f673618-4b7d-47e5-84af-092c995bca8e\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.628667 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4f673618-4b7d-47e5-84af-092c995bca8e-config-data-default\") pod \"4f673618-4b7d-47e5-84af-092c995bca8e\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.628690 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f673618-4b7d-47e5-84af-092c995bca8e-combined-ca-bundle\") pod \"4f673618-4b7d-47e5-84af-092c995bca8e\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.628714 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f673618-4b7d-47e5-84af-092c995bca8e-operator-scripts\") pod \"4f673618-4b7d-47e5-84af-092c995bca8e\" (UID: \"4f673618-4b7d-47e5-84af-092c995bca8e\") " Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.629379 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f673618-4b7d-47e5-84af-092c995bca8e-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "4f673618-4b7d-47e5-84af-092c995bca8e" (UID: "4f673618-4b7d-47e5-84af-092c995bca8e"). InnerVolumeSpecName "config-data-generated". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.629524 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f673618-4b7d-47e5-84af-092c995bca8e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4f673618-4b7d-47e5-84af-092c995bca8e" (UID: "4f673618-4b7d-47e5-84af-092c995bca8e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.629548 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f673618-4b7d-47e5-84af-092c995bca8e-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "4f673618-4b7d-47e5-84af-092c995bca8e" (UID: "4f673618-4b7d-47e5-84af-092c995bca8e"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.629559 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f673618-4b7d-47e5-84af-092c995bca8e-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "4f673618-4b7d-47e5-84af-092c995bca8e" (UID: "4f673618-4b7d-47e5-84af-092c995bca8e"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.629886 4766 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f673618-4b7d-47e5-84af-092c995bca8e-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.630040 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4f673618-4b7d-47e5-84af-092c995bca8e-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.630050 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f673618-4b7d-47e5-84af-092c995bca8e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.630061 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4f673618-4b7d-47e5-84af-092c995bca8e-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.634215 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f673618-4b7d-47e5-84af-092c995bca8e-kube-api-access-chlbf" (OuterVolumeSpecName: "kube-api-access-chlbf") pod "4f673618-4b7d-47e5-84af-092c995bca8e" (UID: "4f673618-4b7d-47e5-84af-092c995bca8e"). InnerVolumeSpecName "kube-api-access-chlbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.640572 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "mysql-db") pod "4f673618-4b7d-47e5-84af-092c995bca8e" (UID: "4f673618-4b7d-47e5-84af-092c995bca8e"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.651684 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f673618-4b7d-47e5-84af-092c995bca8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f673618-4b7d-47e5-84af-092c995bca8e" (UID: "4f673618-4b7d-47e5-84af-092c995bca8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.684120 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f673618-4b7d-47e5-84af-092c995bca8e-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "4f673618-4b7d-47e5-84af-092c995bca8e" (UID: "4f673618-4b7d-47e5-84af-092c995bca8e"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.731822 4766 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f673618-4b7d-47e5-84af-092c995bca8e-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.731863 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chlbf\" (UniqueName: \"kubernetes.io/projected/4f673618-4b7d-47e5-84af-092c995bca8e-kube-api-access-chlbf\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.732117 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.732132 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f673618-4b7d-47e5-84af-092c995bca8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.759264 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 29 11:50:11 crc kubenswrapper[4766]: I0129 11:50:11.833935 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:12 crc kubenswrapper[4766]: I0129 11:50:12.134604 4766 generic.go:334] "Generic (PLEG): container finished" podID="4f673618-4b7d-47e5-84af-092c995bca8e" containerID="4d4badd8b305888bf4d052a10979015964644aa496c75333eb425b61d68f5844" exitCode=0 Jan 29 11:50:12 crc kubenswrapper[4766]: I0129 11:50:12.134694 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4f673618-4b7d-47e5-84af-092c995bca8e","Type":"ContainerDied","Data":"4d4badd8b305888bf4d052a10979015964644aa496c75333eb425b61d68f5844"} Jan 29 11:50:12 crc kubenswrapper[4766]: I0129 11:50:12.134755 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4f673618-4b7d-47e5-84af-092c995bca8e","Type":"ContainerDied","Data":"e3f8c6bef4110ccd8ee7e99ab184ea1f1b3f275259777a50ef5d0ce2c92300f9"} Jan 29 11:50:12 crc kubenswrapper[4766]: I0129 11:50:12.134780 4766 scope.go:117] "RemoveContainer" containerID="4d4badd8b305888bf4d052a10979015964644aa496c75333eb425b61d68f5844" Jan 
29 11:50:12 crc kubenswrapper[4766]: I0129 11:50:12.134789 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 11:50:12 crc kubenswrapper[4766]: I0129 11:50:12.138263 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgkf2" event={"ID":"42003677-86a8-45ca-ab5e-5f8a029a5cf0","Type":"ContainerStarted","Data":"d9d21953d60bf4b60bac1001391b1790319a40dac43f0b436361cbe30b10a9a3"} Jan 29 11:50:12 crc kubenswrapper[4766]: I0129 11:50:12.167590 4766 scope.go:117] "RemoveContainer" containerID="4c443a4729e49ef410d8e51035b7cd68c76673e84fb0a8e8d6348e3b020266ad" Jan 29 11:50:12 crc kubenswrapper[4766]: I0129 11:50:12.172027 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jgkf2" podStartSLOduration=3.297801332 podStartE2EDuration="7.172006636s" podCreationTimestamp="2026-01-29 11:50:05 +0000 UTC" firstStartedPulling="2026-01-29 11:50:07.859857509 +0000 UTC m=+1744.972250520" lastFinishedPulling="2026-01-29 11:50:11.734062813 +0000 UTC m=+1748.846455824" observedRunningTime="2026-01-29 11:50:12.168558126 +0000 UTC m=+1749.280951137" watchObservedRunningTime="2026-01-29 11:50:12.172006636 +0000 UTC m=+1749.284399637" Jan 29 11:50:12 crc kubenswrapper[4766]: I0129 11:50:12.189613 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 11:50:12 crc kubenswrapper[4766]: I0129 11:50:12.196243 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 11:50:12 crc kubenswrapper[4766]: I0129 11:50:12.206916 4766 scope.go:117] "RemoveContainer" containerID="4d4badd8b305888bf4d052a10979015964644aa496c75333eb425b61d68f5844" Jan 29 11:50:12 crc kubenswrapper[4766]: E0129 11:50:12.208741 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d4badd8b305888bf4d052a10979015964644aa496c75333eb425b61d68f5844\": container with ID starting with 4d4badd8b305888bf4d052a10979015964644aa496c75333eb425b61d68f5844 not found: ID does not exist" containerID="4d4badd8b305888bf4d052a10979015964644aa496c75333eb425b61d68f5844" Jan 29 11:50:12 crc kubenswrapper[4766]: I0129 11:50:12.208787 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d4badd8b305888bf4d052a10979015964644aa496c75333eb425b61d68f5844"} err="failed to get container status \"4d4badd8b305888bf4d052a10979015964644aa496c75333eb425b61d68f5844\": rpc error: code = NotFound desc = could not find container \"4d4badd8b305888bf4d052a10979015964644aa496c75333eb425b61d68f5844\": container with ID starting with 4d4badd8b305888bf4d052a10979015964644aa496c75333eb425b61d68f5844 not found: ID does not exist" Jan 29 11:50:12 crc kubenswrapper[4766]: I0129 11:50:12.208816 4766 scope.go:117] "RemoveContainer" containerID="4c443a4729e49ef410d8e51035b7cd68c76673e84fb0a8e8d6348e3b020266ad" Jan 29 11:50:12 crc kubenswrapper[4766]: E0129 11:50:12.209503 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c443a4729e49ef410d8e51035b7cd68c76673e84fb0a8e8d6348e3b020266ad\": container with ID starting with 4c443a4729e49ef410d8e51035b7cd68c76673e84fb0a8e8d6348e3b020266ad not found: ID does not exist" containerID="4c443a4729e49ef410d8e51035b7cd68c76673e84fb0a8e8d6348e3b020266ad" Jan 29 11:50:12 crc 
kubenswrapper[4766]: I0129 11:50:12.209589 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c443a4729e49ef410d8e51035b7cd68c76673e84fb0a8e8d6348e3b020266ad"} err="failed to get container status \"4c443a4729e49ef410d8e51035b7cd68c76673e84fb0a8e8d6348e3b020266ad\": rpc error: code = NotFound desc = could not find container \"4c443a4729e49ef410d8e51035b7cd68c76673e84fb0a8e8d6348e3b020266ad\": container with ID starting with 4c443a4729e49ef410d8e51035b7cd68c76673e84fb0a8e8d6348e3b020266ad not found: ID does not exist" Jan 29 11:50:13 crc kubenswrapper[4766]: E0129 11:50:13.000093 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="924c2d970cbd759f4242c3fce696d5d00f5727e764f49338b35fa22e2a1a46c7" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 29 11:50:13 crc kubenswrapper[4766]: E0129 11:50:13.001808 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="924c2d970cbd759f4242c3fce696d5d00f5727e764f49338b35fa22e2a1a46c7" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 29 11:50:13 crc kubenswrapper[4766]: E0129 11:50:13.008820 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="924c2d970cbd759f4242c3fce696d5d00f5727e764f49338b35fa22e2a1a46c7" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 29 11:50:13 crc kubenswrapper[4766]: E0129 11:50:13.008891 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="ea239fdb-85e2-48e6-b992-42bd9f7e66c8" containerName="galera" Jan 29 11:50:13 crc kubenswrapper[4766]: I0129 11:50:13.234582 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f673618-4b7d-47e5-84af-092c995bca8e" path="/var/lib/kubelet/pods/4f673618-4b7d-47e5-84af-092c995bca8e/volumes" Jan 29 11:50:14 crc kubenswrapper[4766]: I0129 11:50:14.177830 4766 generic.go:334] "Generic (PLEG): container finished" podID="ea239fdb-85e2-48e6-b992-42bd9f7e66c8" containerID="924c2d970cbd759f4242c3fce696d5d00f5727e764f49338b35fa22e2a1a46c7" exitCode=0 Jan 29 11:50:14 crc kubenswrapper[4766]: I0129 11:50:14.177873 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ea239fdb-85e2-48e6-b992-42bd9f7e66c8","Type":"ContainerDied","Data":"924c2d970cbd759f4242c3fce696d5d00f5727e764f49338b35fa22e2a1a46c7"} Jan 29 11:50:14 crc kubenswrapper[4766]: I0129 11:50:14.453817 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 11:50:14 crc kubenswrapper[4766]: I0129 11:50:14.574188 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-operator-scripts\") pod \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " Jan 29 11:50:14 crc kubenswrapper[4766]: I0129 11:50:14.574226 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " Jan 29 11:50:14 crc kubenswrapper[4766]: I0129 11:50:14.574247 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-config-data-default\") pod \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " Jan 29 11:50:14 crc kubenswrapper[4766]: I0129 11:50:14.574271 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjd7l\" (UniqueName: \"kubernetes.io/projected/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-kube-api-access-bjd7l\") pod \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " Jan 29 11:50:14 crc kubenswrapper[4766]: I0129 11:50:14.574308 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-combined-ca-bundle\") pod \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " Jan 29 11:50:14 crc kubenswrapper[4766]: I0129 11:50:14.574360 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-config-data-generated\") pod \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " Jan 29 11:50:14 crc kubenswrapper[4766]: I0129 11:50:14.574450 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-kolla-config\") pod \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " Jan 29 11:50:14 crc kubenswrapper[4766]: I0129 11:50:14.574539 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-galera-tls-certs\") pod \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\" (UID: \"ea239fdb-85e2-48e6-b992-42bd9f7e66c8\") " Jan 29 11:50:14 crc kubenswrapper[4766]: I0129 11:50:14.575275 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "ea239fdb-85e2-48e6-b992-42bd9f7e66c8" (UID: "ea239fdb-85e2-48e6-b992-42bd9f7e66c8"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:50:14 crc kubenswrapper[4766]: I0129 11:50:14.575306 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ea239fdb-85e2-48e6-b992-42bd9f7e66c8" (UID: "ea239fdb-85e2-48e6-b992-42bd9f7e66c8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:50:14 crc kubenswrapper[4766]: I0129 11:50:14.575813 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "ea239fdb-85e2-48e6-b992-42bd9f7e66c8" (UID: "ea239fdb-85e2-48e6-b992-42bd9f7e66c8"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:50:14 crc kubenswrapper[4766]: I0129 11:50:14.576303 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "ea239fdb-85e2-48e6-b992-42bd9f7e66c8" (UID: "ea239fdb-85e2-48e6-b992-42bd9f7e66c8"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:50:14 crc kubenswrapper[4766]: I0129 11:50:14.590971 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-kube-api-access-bjd7l" (OuterVolumeSpecName: "kube-api-access-bjd7l") pod "ea239fdb-85e2-48e6-b992-42bd9f7e66c8" (UID: "ea239fdb-85e2-48e6-b992-42bd9f7e66c8"). InnerVolumeSpecName "kube-api-access-bjd7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:14 crc kubenswrapper[4766]: I0129 11:50:14.592131 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "mysql-db") pod "ea239fdb-85e2-48e6-b992-42bd9f7e66c8" (UID: "ea239fdb-85e2-48e6-b992-42bd9f7e66c8"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 11:50:14 crc kubenswrapper[4766]: I0129 11:50:14.598606 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea239fdb-85e2-48e6-b992-42bd9f7e66c8" (UID: "ea239fdb-85e2-48e6-b992-42bd9f7e66c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:14 crc kubenswrapper[4766]: I0129 11:50:14.615986 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "ea239fdb-85e2-48e6-b992-42bd9f7e66c8" (UID: "ea239fdb-85e2-48e6-b992-42bd9f7e66c8"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:14 crc kubenswrapper[4766]: I0129 11:50:14.677097 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:14 crc kubenswrapper[4766]: I0129 11:50:14.677148 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:14 crc kubenswrapper[4766]: I0129 11:50:14.677170 4766 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:14 crc kubenswrapper[4766]: I0129 11:50:14.677186 4766 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:14 crc kubenswrapper[4766]: I0129 11:50:14.677202 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:14 crc kubenswrapper[4766]: I0129 11:50:14.677254 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 29 11:50:14 crc kubenswrapper[4766]: I0129 11:50:14.677276 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:14 crc kubenswrapper[4766]: I0129 11:50:14.677293 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjd7l\" (UniqueName: \"kubernetes.io/projected/ea239fdb-85e2-48e6-b992-42bd9f7e66c8-kube-api-access-bjd7l\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:14 crc kubenswrapper[4766]: I0129 11:50:14.691373 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 29 11:50:14 crc kubenswrapper[4766]: I0129 11:50:14.778242 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:15 crc kubenswrapper[4766]: I0129 11:50:15.187585 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ea239fdb-85e2-48e6-b992-42bd9f7e66c8","Type":"ContainerDied","Data":"9c115680c0bba07a9f9fdd83f98e5f90220c0d55e638f49633339c0ae7682697"} Jan 29 11:50:15 crc kubenswrapper[4766]: I0129 11:50:15.187648 4766 scope.go:117] "RemoveContainer" containerID="924c2d970cbd759f4242c3fce696d5d00f5727e764f49338b35fa22e2a1a46c7" Jan 29 11:50:15 crc kubenswrapper[4766]: I0129 11:50:15.187761 4766 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 11:50:15 crc kubenswrapper[4766]: I0129 11:50:15.231536 4766 scope.go:117] "RemoveContainer" containerID="660cba457ee644f9f74e7f4d4669bb71ee8bdd88f5e73291cc114e7814b6fa5b"
Jan 29 11:50:15 crc kubenswrapper[4766]: I0129 11:50:15.234403 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"]
Jan 29 11:50:15 crc kubenswrapper[4766]: I0129 11:50:15.234538 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"]
Jan 29 11:50:16 crc kubenswrapper[4766]: E0129 11:50:16.263590 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2 is running failed: container process not found" containerID="ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 29 11:50:16 crc kubenswrapper[4766]: E0129 11:50:16.263938 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2 is running failed: container process not found" containerID="ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 29 11:50:16 crc kubenswrapper[4766]: E0129 11:50:16.264351 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2 is running failed: container process not found" containerID="ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 29 11:50:16 crc kubenswrapper[4766]: E0129 11:50:16.264382 4766 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-2gh2n" podUID="be830961-a6c3-4340-a134-ea20de96b31b" containerName="ovsdb-server"
Jan 29 11:50:16 crc kubenswrapper[4766]: E0129 11:50:16.268841 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d0c73be724cc09499410e85d8a2850f80580b59a49608c7346ae0c91c515cca" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 29 11:50:16 crc kubenswrapper[4766]: E0129 11:50:16.275889 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d0c73be724cc09499410e85d8a2850f80580b59a49608c7346ae0c91c515cca" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 29 11:50:16 crc kubenswrapper[4766]: E0129 11:50:16.278017 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d0c73be724cc09499410e85d8a2850f80580b59a49608c7346ae0c91c515cca" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 29 11:50:16 crc kubenswrapper[4766]: E0129 11:50:16.278102 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-2gh2n" podUID="be830961-a6c3-4340-a134-ea20de96b31b" containerName="ovs-vswitchd"
Jan 29 11:50:16 crc kubenswrapper[4766]: I0129 11:50:16.348246 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jgkf2"
Jan 29 11:50:16 crc kubenswrapper[4766]: I0129 11:50:16.349471 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jgkf2"
Jan 29 11:50:16 crc kubenswrapper[4766]: I0129 11:50:16.394264 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jgkf2"
Jan 29 11:50:17 crc kubenswrapper[4766]: I0129 11:50:17.237312 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea239fdb-85e2-48e6-b992-42bd9f7e66c8" path="/var/lib/kubelet/pods/ea239fdb-85e2-48e6-b992-42bd9f7e66c8/volumes"
Jan 29 11:50:17 crc kubenswrapper[4766]: I0129 11:50:17.247791 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jgkf2"
Jan 29 11:50:17 crc kubenswrapper[4766]: I0129 11:50:17.286153 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jgkf2"]
Jan 29 11:50:19 crc kubenswrapper[4766]: I0129 11:50:19.220433 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jgkf2" podUID="42003677-86a8-45ca-ab5e-5f8a029a5cf0" containerName="registry-server" containerID="cri-o://d9d21953d60bf4b60bac1001391b1790319a40dac43f0b436361cbe30b10a9a3" gracePeriod=2
Jan 29 11:50:20 crc kubenswrapper[4766]: I0129 11:50:20.151055 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jgkf2"
Jan 29 11:50:20 crc kubenswrapper[4766]: I0129 11:50:20.238372 4766 generic.go:334] "Generic (PLEG): container finished" podID="42003677-86a8-45ca-ab5e-5f8a029a5cf0" containerID="d9d21953d60bf4b60bac1001391b1790319a40dac43f0b436361cbe30b10a9a3" exitCode=0
Jan 29 11:50:20 crc kubenswrapper[4766]: I0129 11:50:20.238559 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgkf2" event={"ID":"42003677-86a8-45ca-ab5e-5f8a029a5cf0","Type":"ContainerDied","Data":"d9d21953d60bf4b60bac1001391b1790319a40dac43f0b436361cbe30b10a9a3"}
Jan 29 11:50:20 crc kubenswrapper[4766]: I0129 11:50:20.238657 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgkf2" event={"ID":"42003677-86a8-45ca-ab5e-5f8a029a5cf0","Type":"ContainerDied","Data":"8074e261798ad06400579298c25d46c50219abbf731dc618fdaa85993c8c9cc3"}
Jan 29 11:50:20 crc kubenswrapper[4766]: I0129 11:50:20.238691 4766 scope.go:117] "RemoveContainer" containerID="d9d21953d60bf4b60bac1001391b1790319a40dac43f0b436361cbe30b10a9a3"
Jan 29 11:50:20 crc kubenswrapper[4766]: I0129 11:50:20.239086 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jgkf2"
Jan 29 11:50:20 crc kubenswrapper[4766]: I0129 11:50:20.259403 4766 scope.go:117] "RemoveContainer" containerID="1d50465c1db9048062e97a5459578ec006d9fb79febced6ab004a67652ca1970"
Jan 29 11:50:20 crc kubenswrapper[4766]: I0129 11:50:20.273579 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42003677-86a8-45ca-ab5e-5f8a029a5cf0-utilities\") pod \"42003677-86a8-45ca-ab5e-5f8a029a5cf0\" (UID: \"42003677-86a8-45ca-ab5e-5f8a029a5cf0\") "
Jan 29 11:50:20 crc kubenswrapper[4766]: I0129 11:50:20.273697 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-972w7\" (UniqueName: \"kubernetes.io/projected/42003677-86a8-45ca-ab5e-5f8a029a5cf0-kube-api-access-972w7\") pod \"42003677-86a8-45ca-ab5e-5f8a029a5cf0\" (UID: \"42003677-86a8-45ca-ab5e-5f8a029a5cf0\") "
Jan 29 11:50:20 crc kubenswrapper[4766]: I0129 11:50:20.273806 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42003677-86a8-45ca-ab5e-5f8a029a5cf0-catalog-content\") pod \"42003677-86a8-45ca-ab5e-5f8a029a5cf0\" (UID: \"42003677-86a8-45ca-ab5e-5f8a029a5cf0\") "
Jan 29 11:50:20 crc kubenswrapper[4766]: I0129 11:50:20.275658 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42003677-86a8-45ca-ab5e-5f8a029a5cf0-utilities" (OuterVolumeSpecName: "utilities") pod "42003677-86a8-45ca-ab5e-5f8a029a5cf0" (UID: "42003677-86a8-45ca-ab5e-5f8a029a5cf0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:50:20 crc kubenswrapper[4766]: I0129 11:50:20.284666 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42003677-86a8-45ca-ab5e-5f8a029a5cf0-kube-api-access-972w7" (OuterVolumeSpecName: "kube-api-access-972w7") pod "42003677-86a8-45ca-ab5e-5f8a029a5cf0" (UID: "42003677-86a8-45ca-ab5e-5f8a029a5cf0"). InnerVolumeSpecName "kube-api-access-972w7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:50:20 crc kubenswrapper[4766]: I0129 11:50:20.285870 4766 scope.go:117] "RemoveContainer" containerID="ccfb78799bb8ef876f0e5a4a2e6298e5bdff609a4f3eff0581bd7456af8c204f"
Jan 29 11:50:20 crc kubenswrapper[4766]: I0129 11:50:20.328314 4766 scope.go:117] "RemoveContainer" containerID="d9d21953d60bf4b60bac1001391b1790319a40dac43f0b436361cbe30b10a9a3"
Jan 29 11:50:20 crc kubenswrapper[4766]: E0129 11:50:20.328793 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9d21953d60bf4b60bac1001391b1790319a40dac43f0b436361cbe30b10a9a3\": container with ID starting with d9d21953d60bf4b60bac1001391b1790319a40dac43f0b436361cbe30b10a9a3 not found: ID does not exist" containerID="d9d21953d60bf4b60bac1001391b1790319a40dac43f0b436361cbe30b10a9a3"
Jan 29 11:50:20 crc kubenswrapper[4766]: I0129 11:50:20.328859 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9d21953d60bf4b60bac1001391b1790319a40dac43f0b436361cbe30b10a9a3"} err="failed to get container status \"d9d21953d60bf4b60bac1001391b1790319a40dac43f0b436361cbe30b10a9a3\": rpc error: code = NotFound desc = could not find container \"d9d21953d60bf4b60bac1001391b1790319a40dac43f0b436361cbe30b10a9a3\": container with ID starting with d9d21953d60bf4b60bac1001391b1790319a40dac43f0b436361cbe30b10a9a3 not found: ID does not exist"
Jan 29 11:50:20 crc kubenswrapper[4766]: I0129 11:50:20.328892 4766 scope.go:117] "RemoveContainer" containerID="1d50465c1db9048062e97a5459578ec006d9fb79febced6ab004a67652ca1970"
Jan 29 11:50:20 crc kubenswrapper[4766]: E0129 11:50:20.329256 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d50465c1db9048062e97a5459578ec006d9fb79febced6ab004a67652ca1970\": container with ID starting with 1d50465c1db9048062e97a5459578ec006d9fb79febced6ab004a67652ca1970 not found: ID does not exist" containerID="1d50465c1db9048062e97a5459578ec006d9fb79febced6ab004a67652ca1970"
Jan 29 11:50:20 crc kubenswrapper[4766]: I0129 11:50:20.329286 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d50465c1db9048062e97a5459578ec006d9fb79febced6ab004a67652ca1970"} err="failed to get container status \"1d50465c1db9048062e97a5459578ec006d9fb79febced6ab004a67652ca1970\": rpc error: code = NotFound desc = could not find container \"1d50465c1db9048062e97a5459578ec006d9fb79febced6ab004a67652ca1970\": container with ID starting with 1d50465c1db9048062e97a5459578ec006d9fb79febced6ab004a67652ca1970 not found: ID does not exist"
Jan 29 11:50:20 crc kubenswrapper[4766]: I0129 11:50:20.329302 4766 scope.go:117] "RemoveContainer" containerID="ccfb78799bb8ef876f0e5a4a2e6298e5bdff609a4f3eff0581bd7456af8c204f"
Jan 29 11:50:20 crc kubenswrapper[4766]: E0129 11:50:20.329848 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccfb78799bb8ef876f0e5a4a2e6298e5bdff609a4f3eff0581bd7456af8c204f\": container with ID starting with ccfb78799bb8ef876f0e5a4a2e6298e5bdff609a4f3eff0581bd7456af8c204f not found: ID does not exist" containerID="ccfb78799bb8ef876f0e5a4a2e6298e5bdff609a4f3eff0581bd7456af8c204f"
Jan 29 11:50:20 crc kubenswrapper[4766]: I0129 11:50:20.329887 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccfb78799bb8ef876f0e5a4a2e6298e5bdff609a4f3eff0581bd7456af8c204f"} err="failed to get container status \"ccfb78799bb8ef876f0e5a4a2e6298e5bdff609a4f3eff0581bd7456af8c204f\": rpc error: code = NotFound desc = could not find container \"ccfb78799bb8ef876f0e5a4a2e6298e5bdff609a4f3eff0581bd7456af8c204f\": container with ID starting with ccfb78799bb8ef876f0e5a4a2e6298e5bdff609a4f3eff0581bd7456af8c204f not found: ID does not exist"
containerID={"Type":"cri-o","ID":"ccfb78799bb8ef876f0e5a4a2e6298e5bdff609a4f3eff0581bd7456af8c204f"} err="failed to get container status \"ccfb78799bb8ef876f0e5a4a2e6298e5bdff609a4f3eff0581bd7456af8c204f\": rpc error: code = NotFound desc = could not find container \"ccfb78799bb8ef876f0e5a4a2e6298e5bdff609a4f3eff0581bd7456af8c204f\": container with ID starting with ccfb78799bb8ef876f0e5a4a2e6298e5bdff609a4f3eff0581bd7456af8c204f not found: ID does not exist" Jan 29 11:50:20 crc kubenswrapper[4766]: I0129 11:50:20.351982 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42003677-86a8-45ca-ab5e-5f8a029a5cf0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42003677-86a8-45ca-ab5e-5f8a029a5cf0" (UID: "42003677-86a8-45ca-ab5e-5f8a029a5cf0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:50:20 crc kubenswrapper[4766]: I0129 11:50:20.375461 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42003677-86a8-45ca-ab5e-5f8a029a5cf0-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:20 crc kubenswrapper[4766]: I0129 11:50:20.375509 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-972w7\" (UniqueName: \"kubernetes.io/projected/42003677-86a8-45ca-ab5e-5f8a029a5cf0-kube-api-access-972w7\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:20 crc kubenswrapper[4766]: I0129 11:50:20.375520 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42003677-86a8-45ca-ab5e-5f8a029a5cf0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:20 crc kubenswrapper[4766]: I0129 11:50:20.576474 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jgkf2"] Jan 29 11:50:20 crc kubenswrapper[4766]: I0129 11:50:20.583603 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jgkf2"] Jan 29 11:50:21 crc kubenswrapper[4766]: I0129 11:50:21.232968 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42003677-86a8-45ca-ab5e-5f8a029a5cf0" path="/var/lib/kubelet/pods/42003677-86a8-45ca-ab5e-5f8a029a5cf0/volumes" Jan 29 11:50:21 crc kubenswrapper[4766]: E0129 11:50:21.262726 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2 is running failed: container process not found" containerID="ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 11:50:21 crc kubenswrapper[4766]: E0129 11:50:21.263056 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2 is running failed: container process not found" containerID="ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 11:50:21 crc kubenswrapper[4766]: E0129 11:50:21.263330 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2 is 
running failed: container process not found" containerID="ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 11:50:21 crc kubenswrapper[4766]: E0129 11:50:21.263439 4766 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-2gh2n" podUID="be830961-a6c3-4340-a134-ea20de96b31b" containerName="ovsdb-server" Jan 29 11:50:21 crc kubenswrapper[4766]: E0129 11:50:21.264066 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d0c73be724cc09499410e85d8a2850f80580b59a49608c7346ae0c91c515cca" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 11:50:21 crc kubenswrapper[4766]: E0129 11:50:21.265212 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d0c73be724cc09499410e85d8a2850f80580b59a49608c7346ae0c91c515cca" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 11:50:21 crc kubenswrapper[4766]: E0129 11:50:21.266879 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d0c73be724cc09499410e85d8a2850f80580b59a49608c7346ae0c91c515cca" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 11:50:21 crc kubenswrapper[4766]: E0129 11:50:21.266914 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-2gh2n" podUID="be830961-a6c3-4340-a134-ea20de96b31b" containerName="ovs-vswitchd" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.745453 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-flv87"] Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746008 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b77b577e-b980-46fb-945a-a0b57e3bdc17" containerName="rabbitmq" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746019 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b77b577e-b980-46fb-945a-a0b57e3bdc17" containerName="rabbitmq" Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746030 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f673618-4b7d-47e5-84af-092c995bca8e" containerName="galera" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746036 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f673618-4b7d-47e5-84af-092c995bca8e" containerName="galera" Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746045 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e9e7d37-60ae-4489-a69a-e4168eb87cf2" containerName="probe" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746052 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9e7d37-60ae-4489-a69a-e4168eb87cf2" containerName="probe" Jan 29 11:50:23 crc 
Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746059 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15805cd2-3301-4e59-8c66-adde53408809" containerName="barbican-keystone-listener"
Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746065 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="15805cd2-3301-4e59-8c66-adde53408809" containerName="barbican-keystone-listener"
Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746079 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bb21068-54ea-4e08-b03e-5186a35d7a09" containerName="proxy-httpd"
Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746085 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bb21068-54ea-4e08-b03e-5186a35d7a09" containerName="proxy-httpd"
Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746095 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd5d6aa7-be8d-4439-a4d3-70272705cc2f" containerName="neutron-api"
Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746100 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd5d6aa7-be8d-4439-a4d3-70272705cc2f" containerName="neutron-api"
Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746111 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd5d6aa7-be8d-4439-a4d3-70272705cc2f" containerName="neutron-httpd"
Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746116 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd5d6aa7-be8d-4439-a4d3-70272705cc2f" containerName="neutron-httpd"
Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746125 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9453e394-ed9c-4d36-b200-e559e620a7f7" containerName="ovn-northd"
Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746130 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9453e394-ed9c-4d36-b200-e559e620a7f7" containerName="ovn-northd"
Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746139 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea239fdb-85e2-48e6-b992-42bd9f7e66c8" containerName="mysql-bootstrap"
Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746145 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea239fdb-85e2-48e6-b992-42bd9f7e66c8" containerName="mysql-bootstrap"
Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746155 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd1ffb49-b314-4d31-94d6-de70e35d917e" containerName="glance-log"
Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746163 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd1ffb49-b314-4d31-94d6-de70e35d917e" containerName="glance-log"
Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746171 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace2f6ec-cf57-4742-82e9-e13fd230bb69" containerName="rabbitmq"
Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746177 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace2f6ec-cf57-4742-82e9-e13fd230bb69" containerName="rabbitmq"
Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746184 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42003677-86a8-45ca-ab5e-5f8a029a5cf0" containerName="extract-content"
Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746190 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="42003677-86a8-45ca-ab5e-5f8a029a5cf0" containerName="extract-content"
Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746200 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bb21068-54ea-4e08-b03e-5186a35d7a09" containerName="sg-core"
Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746205 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bb21068-54ea-4e08-b03e-5186a35d7a09" containerName="sg-core"
Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746214 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a084e5b1-d167-4678-8ab9-af72fb1d07fd" containerName="barbican-api-log"
Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746221 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a084e5b1-d167-4678-8ab9-af72fb1d07fd" containerName="barbican-api-log"
Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746232 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1996793f-f3ca-4559-97d6-867f0d0a2b61" containerName="glance-log"
Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746238 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1996793f-f3ca-4559-97d6-867f0d0a2b61" containerName="glance-log"
Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746245 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e238ce2e-9a21-43c5-94c2-0a31ab078c79" containerName="barbican-worker-log"
Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746252 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e238ce2e-9a21-43c5-94c2-0a31ab078c79" containerName="barbican-worker-log"
Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746261 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f673618-4b7d-47e5-84af-092c995bca8e" containerName="mysql-bootstrap"
Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746267 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f673618-4b7d-47e5-84af-092c995bca8e" containerName="mysql-bootstrap"
Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746276 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9453e394-ed9c-4d36-b200-e559e620a7f7" containerName="openstack-network-exporter"
Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746282 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9453e394-ed9c-4d36-b200-e559e620a7f7" containerName="openstack-network-exporter"
Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746291 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c6eae2b-18a8-4a82-95e2-4940490b1678" containerName="nova-cell1-conductor-conductor"
Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746297 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c6eae2b-18a8-4a82-95e2-4940490b1678" containerName="nova-cell1-conductor-conductor"
Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746308 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e9e7d37-60ae-4489-a69a-e4168eb87cf2" containerName="cinder-scheduler"
Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746314 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9e7d37-60ae-4489-a69a-e4168eb87cf2" containerName="cinder-scheduler"
Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746322 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace2f6ec-cf57-4742-82e9-e13fd230bb69" containerName="setup-container"
Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746327 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace2f6ec-cf57-4742-82e9-e13fd230bb69" containerName="setup-container"
podUID="ace2f6ec-cf57-4742-82e9-e13fd230bb69" containerName="setup-container" Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746335 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0607cc62-49d5-4a25-b4ad-636cae5d1e7e" containerName="keystone-api" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746341 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0607cc62-49d5-4a25-b4ad-636cae5d1e7e" containerName="keystone-api" Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746351 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d00d673d-aea5-4014-8e2b-bcb78afb7606" containerName="memcached" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746356 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d00d673d-aea5-4014-8e2b-bcb78afb7606" containerName="memcached" Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746367 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d99f4d4-0dab-45de-ac76-7a0fa820c353" containerName="nova-scheduler-scheduler" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746372 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d99f4d4-0dab-45de-ac76-7a0fa820c353" containerName="nova-scheduler-scheduler" Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746379 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bb21068-54ea-4e08-b03e-5186a35d7a09" containerName="ceilometer-central-agent" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746384 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bb21068-54ea-4e08-b03e-5186a35d7a09" containerName="ceilometer-central-agent" Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746394 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a8c513-ef7f-49ce-a0d8-2d9351abca2a" containerName="nova-metadata-metadata" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746400 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a8c513-ef7f-49ce-a0d8-2d9351abca2a" containerName="nova-metadata-metadata" Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746425 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e238ce2e-9a21-43c5-94c2-0a31ab078c79" containerName="barbican-worker" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746431 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e238ce2e-9a21-43c5-94c2-0a31ab078c79" containerName="barbican-worker" Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746440 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b77b577e-b980-46fb-945a-a0b57e3bdc17" containerName="setup-container" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746446 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b77b577e-b980-46fb-945a-a0b57e3bdc17" containerName="setup-container" Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746452 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea239fdb-85e2-48e6-b992-42bd9f7e66c8" containerName="galera" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746459 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea239fdb-85e2-48e6-b992-42bd9f7e66c8" containerName="galera" Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746471 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7245aebe-fe32-42fc-a489-c38b15bb4308" containerName="nova-cell0-conductor-conductor" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746478 
4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7245aebe-fe32-42fc-a489-c38b15bb4308" containerName="nova-cell0-conductor-conductor" Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746486 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42003677-86a8-45ca-ab5e-5f8a029a5cf0" containerName="extract-utilities" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746492 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="42003677-86a8-45ca-ab5e-5f8a029a5cf0" containerName="extract-utilities" Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746500 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bb21068-54ea-4e08-b03e-5186a35d7a09" containerName="ceilometer-notification-agent" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746505 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bb21068-54ea-4e08-b03e-5186a35d7a09" containerName="ceilometer-notification-agent" Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746515 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15805cd2-3301-4e59-8c66-adde53408809" containerName="barbican-keystone-listener-log" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746521 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="15805cd2-3301-4e59-8c66-adde53408809" containerName="barbican-keystone-listener-log" Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746530 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a084e5b1-d167-4678-8ab9-af72fb1d07fd" containerName="barbican-api" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746537 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a084e5b1-d167-4678-8ab9-af72fb1d07fd" containerName="barbican-api" Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746550 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1996793f-f3ca-4559-97d6-867f0d0a2b61" containerName="glance-httpd" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746556 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1996793f-f3ca-4559-97d6-867f0d0a2b61" containerName="glance-httpd" Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746563 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd1ffb49-b314-4d31-94d6-de70e35d917e" containerName="glance-httpd" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746568 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd1ffb49-b314-4d31-94d6-de70e35d917e" containerName="glance-httpd" Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746579 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a8c513-ef7f-49ce-a0d8-2d9351abca2a" containerName="nova-metadata-log" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746585 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a8c513-ef7f-49ce-a0d8-2d9351abca2a" containerName="nova-metadata-log" Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746594 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6739e909-eb6b-4578-8436-fa9f24385e0a" containerName="mariadb-account-create-update" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746600 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6739e909-eb6b-4578-8436-fa9f24385e0a" containerName="mariadb-account-create-update" Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746607 4766 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="42003677-86a8-45ca-ab5e-5f8a029a5cf0" containerName="registry-server" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746614 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="42003677-86a8-45ca-ab5e-5f8a029a5cf0" containerName="registry-server" Jan 29 11:50:23 crc kubenswrapper[4766]: E0129 11:50:23.746624 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6739e909-eb6b-4578-8436-fa9f24385e0a" containerName="mariadb-account-create-update" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746629 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6739e909-eb6b-4578-8436-fa9f24385e0a" containerName="mariadb-account-create-update" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746776 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="6739e909-eb6b-4578-8436-fa9f24385e0a" containerName="mariadb-account-create-update" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746792 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e238ce2e-9a21-43c5-94c2-0a31ab078c79" containerName="barbican-worker" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746803 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f673618-4b7d-47e5-84af-092c995bca8e" containerName="galera" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746811 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="ace2f6ec-cf57-4742-82e9-e13fd230bb69" containerName="rabbitmq" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746820 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bb21068-54ea-4e08-b03e-5186a35d7a09" containerName="ceilometer-notification-agent" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746828 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="1996793f-f3ca-4559-97d6-867f0d0a2b61" containerName="glance-httpd" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746836 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd5d6aa7-be8d-4439-a4d3-70272705cc2f" containerName="neutron-api" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746848 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0607cc62-49d5-4a25-b4ad-636cae5d1e7e" containerName="keystone-api" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746857 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="a084e5b1-d167-4678-8ab9-af72fb1d07fd" containerName="barbican-api-log" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746866 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bb21068-54ea-4e08-b03e-5186a35d7a09" containerName="sg-core" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746876 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e238ce2e-9a21-43c5-94c2-0a31ab078c79" containerName="barbican-worker-log" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746885 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e9e7d37-60ae-4489-a69a-e4168eb87cf2" containerName="cinder-scheduler" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746891 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd5d6aa7-be8d-4439-a4d3-70272705cc2f" containerName="neutron-httpd" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746900 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="34a8c513-ef7f-49ce-a0d8-2d9351abca2a" containerName="nova-metadata-log" Jan 29 11:50:23 
crc kubenswrapper[4766]: I0129 11:50:23.746910 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9453e394-ed9c-4d36-b200-e559e620a7f7" containerName="openstack-network-exporter" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746917 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="1996793f-f3ca-4559-97d6-867f0d0a2b61" containerName="glance-log" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746926 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d99f4d4-0dab-45de-ac76-7a0fa820c353" containerName="nova-scheduler-scheduler" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746935 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bb21068-54ea-4e08-b03e-5186a35d7a09" containerName="ceilometer-central-agent" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746943 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="34a8c513-ef7f-49ce-a0d8-2d9351abca2a" containerName="nova-metadata-metadata" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746951 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="a084e5b1-d167-4678-8ab9-af72fb1d07fd" containerName="barbican-api" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746958 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bb21068-54ea-4e08-b03e-5186a35d7a09" containerName="proxy-httpd" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746967 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7245aebe-fe32-42fc-a489-c38b15bb4308" containerName="nova-cell0-conductor-conductor" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746975 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="42003677-86a8-45ca-ab5e-5f8a029a5cf0" containerName="registry-server" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746982 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd1ffb49-b314-4d31-94d6-de70e35d917e" containerName="glance-httpd" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746990 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9453e394-ed9c-4d36-b200-e559e620a7f7" containerName="ovn-northd" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.746998 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="15805cd2-3301-4e59-8c66-adde53408809" containerName="barbican-keystone-listener" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.747006 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea239fdb-85e2-48e6-b992-42bd9f7e66c8" containerName="galera" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.747016 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d00d673d-aea5-4014-8e2b-bcb78afb7606" containerName="memcached" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.747021 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd1ffb49-b314-4d31-94d6-de70e35d917e" containerName="glance-log" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.747028 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="15805cd2-3301-4e59-8c66-adde53408809" containerName="barbican-keystone-listener-log" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.747036 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c6eae2b-18a8-4a82-95e2-4940490b1678" containerName="nova-cell1-conductor-conductor" Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.747043 4766 
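
Everything from 11:50:23.746008 through .747296 is a single admission event: before the new redhat-marketplace-flv87 pod is admitted, the CPU manager, its state checkpoint, and the memory manager each discard per-container accounting for pods that no longer exist, one line per stale container. Bursts like this compress well into distinct (podUID, containerName) pairs. A stdlib sketch for doing that over a capture on stdin (the regular expressions are illustrative):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    )

    var (
    	// Lines emitted by the three managers during stale-state cleanup.
    	burst = regexp.MustCompile(`RemoveStaleState|Deleted CPUSet assignment`)
    	// The podUID / containerName fields those lines share.
    	stale = regexp.MustCompile(`podUID="([0-9a-f-]+)" containerName="([^"]+)"`)
    )

    func main() {
    	seen := map[string]int{} // "podUID/container" -> line count
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
    	for sc.Scan() {
    		line := sc.Text()
    		if !burst.MatchString(line) {
    			continue
    		}
    		if m := stale.FindStringSubmatch(line); m != nil {
    			seen[m[1]+"/"+m[2]]++
    		}
    	}
    	fmt.Printf("%d distinct stale containers\n", len(seen))
    	for k, n := range seen {
    		fmt.Printf("%3d lines  %s\n", n, k)
    	}
    }
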
Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.748071 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-flv87"
Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.775363 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-flv87"]
Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.876620 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7530c2b9-27f4-4100-a3cc-f73b46f86712-catalog-content\") pod \"redhat-marketplace-flv87\" (UID: \"7530c2b9-27f4-4100-a3cc-f73b46f86712\") " pod="openshift-marketplace/redhat-marketplace-flv87"
Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.876735 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7530c2b9-27f4-4100-a3cc-f73b46f86712-utilities\") pod \"redhat-marketplace-flv87\" (UID: \"7530c2b9-27f4-4100-a3cc-f73b46f86712\") " pod="openshift-marketplace/redhat-marketplace-flv87"
Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.876786 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8qzb\" (UniqueName: \"kubernetes.io/projected/7530c2b9-27f4-4100-a3cc-f73b46f86712-kube-api-access-b8qzb\") pod \"redhat-marketplace-flv87\" (UID: \"7530c2b9-27f4-4100-a3cc-f73b46f86712\") " pod="openshift-marketplace/redhat-marketplace-flv87"
Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.977913 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8qzb\" (UniqueName: \"kubernetes.io/projected/7530c2b9-27f4-4100-a3cc-f73b46f86712-kube-api-access-b8qzb\") pod \"redhat-marketplace-flv87\" (UID: \"7530c2b9-27f4-4100-a3cc-f73b46f86712\") " pod="openshift-marketplace/redhat-marketplace-flv87"
Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.978034 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7530c2b9-27f4-4100-a3cc-f73b46f86712-catalog-content\") pod \"redhat-marketplace-flv87\" (UID: \"7530c2b9-27f4-4100-a3cc-f73b46f86712\") " pod="openshift-marketplace/redhat-marketplace-flv87"
Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.978098 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7530c2b9-27f4-4100-a3cc-f73b46f86712-utilities\") pod \"redhat-marketplace-flv87\" (UID: \"7530c2b9-27f4-4100-a3cc-f73b46f86712\") " pod="openshift-marketplace/redhat-marketplace-flv87"
Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.978657 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7530c2b9-27f4-4100-a3cc-f73b46f86712-utilities\") pod \"redhat-marketplace-flv87\" (UID: \"7530c2b9-27f4-4100-a3cc-f73b46f86712\") " pod="openshift-marketplace/redhat-marketplace-flv87"
Jan 29 11:50:23 crc kubenswrapper[4766]: I0129 11:50:23.979327 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7530c2b9-27f4-4100-a3cc-f73b46f86712-catalog-content\") pod \"redhat-marketplace-flv87\" (UID: \"7530c2b9-27f4-4100-a3cc-f73b46f86712\") " pod="openshift-marketplace/redhat-marketplace-flv87"
Jan 29 11:50:24 crc kubenswrapper[4766]: I0129 11:50:24.002756 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8qzb\" (UniqueName: \"kubernetes.io/projected/7530c2b9-27f4-4100-a3cc-f73b46f86712-kube-api-access-b8qzb\") pod \"redhat-marketplace-flv87\" (UID: \"7530c2b9-27f4-4100-a3cc-f73b46f86712\") " pod="openshift-marketplace/redhat-marketplace-flv87"
Jan 29 11:50:24 crc kubenswrapper[4766]: I0129 11:50:24.075539 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-flv87"
Jan 29 11:50:24 crc kubenswrapper[4766]: I0129 11:50:24.519716 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-flv87"]
Jan 29 11:50:25 crc kubenswrapper[4766]: I0129 11:50:25.229392 4766 scope.go:117] "RemoveContainer" containerID="0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d"
Jan 29 11:50:25 crc kubenswrapper[4766]: E0129 11:50:25.230050 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a"
Jan 29 11:50:25 crc kubenswrapper[4766]: I0129 11:50:25.287421 4766 generic.go:334] "Generic (PLEG): container finished" podID="7530c2b9-27f4-4100-a3cc-f73b46f86712" containerID="c8d5cc9c2c029217a91943a9931c40d96ed1692505a7f4719be51c53ecd24f57" exitCode=0
Jan 29 11:50:25 crc kubenswrapper[4766]: I0129 11:50:25.287538 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-flv87" event={"ID":"7530c2b9-27f4-4100-a3cc-f73b46f86712","Type":"ContainerDied","Data":"c8d5cc9c2c029217a91943a9931c40d96ed1692505a7f4719be51c53ecd24f57"}
Jan 29 11:50:25 crc kubenswrapper[4766]: I0129 11:50:25.287565 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-flv87" event={"ID":"7530c2b9-27f4-4100-a3cc-f73b46f86712","Type":"ContainerStarted","Data":"113de0bcadd31f17f0ccc7de33789273721ec1471e4cb3272af667ac2a149f8e"}
Jan 29 11:50:26 crc kubenswrapper[4766]: E0129 11:50:26.263319 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2 is running failed: container process not found" containerID="ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 29 11:50:26 crc kubenswrapper[4766]: E0129 11:50:26.264132 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2 is running failed: container process not found" containerID="ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 29 11:50:26 crc kubenswrapper[4766]: E0129 11:50:26.264521 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2 is running failed: container process not found" containerID="ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 29 11:50:26 crc kubenswrapper[4766]: E0129 11:50:26.264560 4766 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-2gh2n" podUID="be830961-a6c3-4340-a134-ea20de96b31b" containerName="ovsdb-server"
Jan 29 11:50:26 crc kubenswrapper[4766]: E0129 11:50:26.264801 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d0c73be724cc09499410e85d8a2850f80580b59a49608c7346ae0c91c515cca" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 29 11:50:26 crc kubenswrapper[4766]: E0129 11:50:26.266042 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d0c73be724cc09499410e85d8a2850f80580b59a49608c7346ae0c91c515cca" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 29 11:50:26 crc kubenswrapper[4766]: E0129 11:50:26.267532 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d0c73be724cc09499410e85d8a2850f80580b59a49608c7346ae0c91c515cca" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 29 11:50:26 crc kubenswrapper[4766]: E0129 11:50:26.267573 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-2gh2n" podUID="be830961-a6c3-4340-a134-ea20de96b31b" containerName="ovs-vswitchd"
Jan 29 11:50:26 crc kubenswrapper[4766]: I0129 11:50:26.297162 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-flv87" event={"ID":"7530c2b9-27f4-4100-a3cc-f73b46f86712","Type":"ContainerStarted","Data":"724b4a6fc729996f14f06d0ae5d48a3c7e040dc915d959a80be6f8d5a2675879"}
Jan 29 11:50:27 crc kubenswrapper[4766]: I0129 11:50:27.307357 4766 generic.go:334] "Generic (PLEG): container finished" podID="7530c2b9-27f4-4100-a3cc-f73b46f86712" containerID="724b4a6fc729996f14f06d0ae5d48a3c7e040dc915d959a80be6f8d5a2675879" exitCode=0
Jan 29 11:50:27 crc kubenswrapper[4766]: I0129 11:50:27.307431 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-flv87" event={"ID":"7530c2b9-27f4-4100-a3cc-f73b46f86712","Type":"ContainerDied","Data":"724b4a6fc729996f14f06d0ae5d48a3c7e040dc915d959a80be6f8d5a2675879"}
Jan 29 11:50:29 crc kubenswrapper[4766]: I0129 11:50:29.344455 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2gh2n_be830961-a6c3-4340-a134-ea20de96b31b/ovs-vswitchd/0.log"
Jan 29 11:50:29 crc kubenswrapper[4766]: I0129 11:50:29.346196 4766 generic.go:334] "Generic (PLEG): container finished" podID="be830961-a6c3-4340-a134-ea20de96b31b" containerID="6d0c73be724cc09499410e85d8a2850f80580b59a49608c7346ae0c91c515cca" exitCode=137
Jan 29 11:50:29 crc kubenswrapper[4766]: I0129 11:50:29.346283 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2gh2n" event={"ID":"be830961-a6c3-4340-a134-ea20de96b31b","Type":"ContainerDied","Data":"6d0c73be724cc09499410e85d8a2850f80580b59a49608c7346ae0c91c515cca"}
Jan 29 11:50:29 crc kubenswrapper[4766]: I0129 11:50:29.350449 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-flv87" event={"ID":"7530c2b9-27f4-4100-a3cc-f73b46f86712","Type":"ContainerStarted","Data":"7b7ff34d6374987ee134dd18742dc109d735d4b36a411a960ad9120887fdeb27"}
Jan 29 11:50:29 crc kubenswrapper[4766]: I0129 11:50:29.377072 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-flv87" podStartSLOduration=3.277977668 podStartE2EDuration="6.37705642s" podCreationTimestamp="2026-01-29 11:50:23 +0000 UTC" firstStartedPulling="2026-01-29 11:50:25.288986233 +0000 UTC m=+1762.401379244" lastFinishedPulling="2026-01-29 11:50:28.388064985 +0000 UTC m=+1765.500457996" observedRunningTime="2026-01-29 11:50:29.369294586 +0000 UTC m=+1766.481687597" watchObservedRunningTime="2026-01-29 11:50:29.37705642 +0000 UTC m=+1766.489449431"
Jan 29 11:50:29 crc kubenswrapper[4766]: I0129 11:50:29.599230 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2gh2n_be830961-a6c3-4340-a134-ea20de96b31b/ovs-vswitchd/0.log"
Jan 29 11:50:29 crc kubenswrapper[4766]: I0129 11:50:29.600161 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-2gh2n"
Jan 29 11:50:29 crc kubenswrapper[4766]: I0129 11:50:29.764608 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/be830961-a6c3-4340-a134-ea20de96b31b-etc-ovs\") pod \"be830961-a6c3-4340-a134-ea20de96b31b\" (UID: \"be830961-a6c3-4340-a134-ea20de96b31b\") "
Jan 29 11:50:29 crc kubenswrapper[4766]: I0129 11:50:29.764653 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/be830961-a6c3-4340-a134-ea20de96b31b-var-lib\") pod \"be830961-a6c3-4340-a134-ea20de96b31b\" (UID: \"be830961-a6c3-4340-a134-ea20de96b31b\") "
Jan 29 11:50:29 crc kubenswrapper[4766]: I0129 11:50:29.764669 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be830961-a6c3-4340-a134-ea20de96b31b-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "be830961-a6c3-4340-a134-ea20de96b31b" (UID: "be830961-a6c3-4340-a134-ea20de96b31b"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:50:29 crc kubenswrapper[4766]: I0129 11:50:29.764721 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/be830961-a6c3-4340-a134-ea20de96b31b-var-run\") pod \"be830961-a6c3-4340-a134-ea20de96b31b\" (UID: \"be830961-a6c3-4340-a134-ea20de96b31b\") " Jan 29 11:50:29 crc kubenswrapper[4766]: I0129 11:50:29.764764 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/be830961-a6c3-4340-a134-ea20de96b31b-var-log\") pod \"be830961-a6c3-4340-a134-ea20de96b31b\" (UID: \"be830961-a6c3-4340-a134-ea20de96b31b\") " Jan 29 11:50:29 crc kubenswrapper[4766]: I0129 11:50:29.764797 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be830961-a6c3-4340-a134-ea20de96b31b-var-run" (OuterVolumeSpecName: "var-run") pod "be830961-a6c3-4340-a134-ea20de96b31b" (UID: "be830961-a6c3-4340-a134-ea20de96b31b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:50:29 crc kubenswrapper[4766]: I0129 11:50:29.764818 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be830961-a6c3-4340-a134-ea20de96b31b-scripts\") pod \"be830961-a6c3-4340-a134-ea20de96b31b\" (UID: \"be830961-a6c3-4340-a134-ea20de96b31b\") " Jan 29 11:50:29 crc kubenswrapper[4766]: I0129 11:50:29.764816 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be830961-a6c3-4340-a134-ea20de96b31b-var-log" (OuterVolumeSpecName: "var-log") pod "be830961-a6c3-4340-a134-ea20de96b31b" (UID: "be830961-a6c3-4340-a134-ea20de96b31b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:50:29 crc kubenswrapper[4766]: I0129 11:50:29.764850 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be830961-a6c3-4340-a134-ea20de96b31b-var-lib" (OuterVolumeSpecName: "var-lib") pod "be830961-a6c3-4340-a134-ea20de96b31b" (UID: "be830961-a6c3-4340-a134-ea20de96b31b"). InnerVolumeSpecName "var-lib". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:50:29 crc kubenswrapper[4766]: I0129 11:50:29.764906 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bqtd\" (UniqueName: \"kubernetes.io/projected/be830961-a6c3-4340-a134-ea20de96b31b-kube-api-access-5bqtd\") pod \"be830961-a6c3-4340-a134-ea20de96b31b\" (UID: \"be830961-a6c3-4340-a134-ea20de96b31b\") " Jan 29 11:50:29 crc kubenswrapper[4766]: I0129 11:50:29.765372 4766 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/be830961-a6c3-4340-a134-ea20de96b31b-etc-ovs\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:29 crc kubenswrapper[4766]: I0129 11:50:29.765393 4766 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/be830961-a6c3-4340-a134-ea20de96b31b-var-lib\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:29 crc kubenswrapper[4766]: I0129 11:50:29.765425 4766 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/be830961-a6c3-4340-a134-ea20de96b31b-var-run\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:29 crc kubenswrapper[4766]: I0129 11:50:29.765438 4766 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/be830961-a6c3-4340-a134-ea20de96b31b-var-log\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:29 crc kubenswrapper[4766]: I0129 11:50:29.765859 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be830961-a6c3-4340-a134-ea20de96b31b-scripts" (OuterVolumeSpecName: "scripts") pod "be830961-a6c3-4340-a134-ea20de96b31b" (UID: "be830961-a6c3-4340-a134-ea20de96b31b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:50:29 crc kubenswrapper[4766]: I0129 11:50:29.770111 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be830961-a6c3-4340-a134-ea20de96b31b-kube-api-access-5bqtd" (OuterVolumeSpecName: "kube-api-access-5bqtd") pod "be830961-a6c3-4340-a134-ea20de96b31b" (UID: "be830961-a6c3-4340-a134-ea20de96b31b"). InnerVolumeSpecName "kube-api-access-5bqtd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:29 crc kubenswrapper[4766]: I0129 11:50:29.867165 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be830961-a6c3-4340-a134-ea20de96b31b-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:29 crc kubenswrapper[4766]: I0129 11:50:29.867204 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bqtd\" (UniqueName: \"kubernetes.io/projected/be830961-a6c3-4340-a134-ea20de96b31b-kube-api-access-5bqtd\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:30 crc kubenswrapper[4766]: I0129 11:50:30.375076 4766 generic.go:334] "Generic (PLEG): container finished" podID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerID="8e68677dd185d8414adc8711bc359046fd3ba61c227101b176907d577a947636" exitCode=137 Jan 29 11:50:30 crc kubenswrapper[4766]: I0129 11:50:30.375089 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerDied","Data":"8e68677dd185d8414adc8711bc359046fd3ba61c227101b176907d577a947636"} Jan 29 11:50:30 crc kubenswrapper[4766]: I0129 11:50:30.377241 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2gh2n_be830961-a6c3-4340-a134-ea20de96b31b/ovs-vswitchd/0.log" Jan 29 11:50:30 crc kubenswrapper[4766]: I0129 11:50:30.378329 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2gh2n" event={"ID":"be830961-a6c3-4340-a134-ea20de96b31b","Type":"ContainerDied","Data":"123ae6f0b0c8f489594c3f72a5b97a2ba7c2ca88afbbfb473c9f02131f30b28d"} Jan 29 11:50:30 crc kubenswrapper[4766]: I0129 11:50:30.378379 4766 scope.go:117] "RemoveContainer" containerID="6d0c73be724cc09499410e85d8a2850f80580b59a49608c7346ae0c91c515cca" Jan 29 11:50:30 crc kubenswrapper[4766]: I0129 11:50:30.378394 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-2gh2n" Jan 29 11:50:30 crc kubenswrapper[4766]: I0129 11:50:30.433166 4766 scope.go:117] "RemoveContainer" containerID="ada527602c2d111c8cc15b33ae428a79b9321f607d745fd8c9af26be1b1d14a2" Jan 29 11:50:30 crc kubenswrapper[4766]: I0129 11:50:30.446172 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-2gh2n"] Jan 29 11:50:30 crc kubenswrapper[4766]: I0129 11:50:30.451584 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-2gh2n"] Jan 29 11:50:30 crc kubenswrapper[4766]: I0129 11:50:30.498592 4766 scope.go:117] "RemoveContainer" containerID="e76508553331ee93028854abc43e1fdbfc214e061ac1372339c39e9dd3e3651f" Jan 29 11:50:33 crc kubenswrapper[4766]: I0129 11:50:31.234267 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be830961-a6c3-4340-a134-ea20de96b31b" path="/var/lib/kubelet/pods/be830961-a6c3-4340-a134-ea20de96b31b/volumes" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.076186 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-flv87" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.076543 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-flv87" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.131026 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-flv87" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.206447 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.329991 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chjms\" (UniqueName: \"kubernetes.io/projected/c299dfaa-12db-4482-ab89-55ba85b8e2a7-kube-api-access-chjms\") pod \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\" (UID: \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") " Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.330048 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c299dfaa-12db-4482-ab89-55ba85b8e2a7-lock\") pod \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\" (UID: \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") " Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.330079 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c299dfaa-12db-4482-ab89-55ba85b8e2a7-etc-swift\") pod \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\" (UID: \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") " Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.330147 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\" (UID: \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") " Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.330191 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c299dfaa-12db-4482-ab89-55ba85b8e2a7-cache\") pod \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\" (UID: \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") " Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.330281 4766 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c299dfaa-12db-4482-ab89-55ba85b8e2a7-combined-ca-bundle\") pod \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\" (UID: \"c299dfaa-12db-4482-ab89-55ba85b8e2a7\") " Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.330906 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c299dfaa-12db-4482-ab89-55ba85b8e2a7-lock" (OuterVolumeSpecName: "lock") pod "c299dfaa-12db-4482-ab89-55ba85b8e2a7" (UID: "c299dfaa-12db-4482-ab89-55ba85b8e2a7"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.332017 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c299dfaa-12db-4482-ab89-55ba85b8e2a7-cache" (OuterVolumeSpecName: "cache") pod "c299dfaa-12db-4482-ab89-55ba85b8e2a7" (UID: "c299dfaa-12db-4482-ab89-55ba85b8e2a7"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.337597 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c299dfaa-12db-4482-ab89-55ba85b8e2a7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c299dfaa-12db-4482-ab89-55ba85b8e2a7" (UID: "c299dfaa-12db-4482-ab89-55ba85b8e2a7"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.337662 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c299dfaa-12db-4482-ab89-55ba85b8e2a7-kube-api-access-chjms" (OuterVolumeSpecName: "kube-api-access-chjms") pod "c299dfaa-12db-4482-ab89-55ba85b8e2a7" (UID: "c299dfaa-12db-4482-ab89-55ba85b8e2a7"). InnerVolumeSpecName "kube-api-access-chjms". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.351733 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "swift") pod "c299dfaa-12db-4482-ab89-55ba85b8e2a7" (UID: "c299dfaa-12db-4482-ab89-55ba85b8e2a7"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.417091 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c299dfaa-12db-4482-ab89-55ba85b8e2a7","Type":"ContainerDied","Data":"00d2dddd84ce0b74b92d4be4bc9599cce85a26c3c8910d5387fb145c688129de"} Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.417150 4766 scope.go:117] "RemoveContainer" containerID="8e68677dd185d8414adc8711bc359046fd3ba61c227101b176907d577a947636" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.417208 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.431559 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.431588 4766 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c299dfaa-12db-4482-ab89-55ba85b8e2a7-cache\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.431599 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chjms\" (UniqueName: \"kubernetes.io/projected/c299dfaa-12db-4482-ab89-55ba85b8e2a7-kube-api-access-chjms\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.431608 4766 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c299dfaa-12db-4482-ab89-55ba85b8e2a7-lock\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.431617 4766 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c299dfaa-12db-4482-ab89-55ba85b8e2a7-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.446960 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.464104 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-flv87" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.482472 4766 scope.go:117] "RemoveContainer" containerID="b5a310208e51de3a1f1085a299d696e0c092c1ac6a305a7368d95a466bfff254" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.517940 4766 scope.go:117] "RemoveContainer" containerID="2395dfbbbded053ffa0416aaf69a1b9af00ea806ccc677235dd81f9d3e9af4d0" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.520183 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-flv87"] Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.533316 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.540275 4766 scope.go:117] "RemoveContainer" containerID="f4a7df4ad8946a4ec821983033924fd3dd8e163b9568817e4bde1fb325d0beeb" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.619789 4766 scope.go:117] "RemoveContainer" containerID="c34078361e9f1ca8e71c227ebd7d7091b558e6c3354bb51e22b1e1374342fcd1" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.647935 4766 scope.go:117] "RemoveContainer" containerID="257558dd443e4fbb0f93499c81b54107c340b1424e2baeb386f3a283efa8bdc7" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.683912 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c299dfaa-12db-4482-ab89-55ba85b8e2a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c299dfaa-12db-4482-ab89-55ba85b8e2a7" (UID: "c299dfaa-12db-4482-ab89-55ba85b8e2a7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.737603 4766 scope.go:117] "RemoveContainer" containerID="d7be2c0fabfadf12060358b5738adc72343b29f57c77135d1af1a5ae1e4e2863" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.739218 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c299dfaa-12db-4482-ab89-55ba85b8e2a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.762418 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.769369 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.772053 4766 scope.go:117] "RemoveContainer" containerID="8d94f2b31596b3ca99397133e0199e33b8ac9312697c345fc4b87be8aeecd36f" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.803053 4766 scope.go:117] "RemoveContainer" containerID="30f33e794206b04a93fc0f4e715cfe43660a23a19676c6e5b3df502d2e869f1b" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.844043 4766 scope.go:117] "RemoveContainer" containerID="e0305b2958f6c65d81b49c58ff14fade2e99341839d85bcc73aa51a8cd5a3041" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.877462 4766 scope.go:117] "RemoveContainer" containerID="7c33d37f74f55ffa51cd765a4b94d2af021150d55ef7e15a523b325c621e7d0a" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.901031 4766 scope.go:117] "RemoveContainer" containerID="0025dd537da59d77d5c32f5643222b1c209187a4cb4389da45a65ec542521294" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.918577 4766 scope.go:117] "RemoveContainer" containerID="aff768bf5b19009768658ec1f0fc18767e8949cd575199e18d90c8f182040d28" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.939365 4766 scope.go:117] "RemoveContainer" containerID="7f8c5aeba92943edcfc2aff61715cdbbc5630ac266d0729c5b84d3f25837100d" Jan 29 11:50:34 crc kubenswrapper[4766]: I0129 11:50:34.967211 4766 scope.go:117] "RemoveContainer" containerID="0a43006268a6331aa5c508b013f959c36b198052b905a54a63dfcc6e786548d6" Jan 29 11:50:35 crc kubenswrapper[4766]: I0129 11:50:35.235019 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" path="/var/lib/kubelet/pods/c299dfaa-12db-4482-ab89-55ba85b8e2a7/volumes" Jan 29 11:50:36 crc kubenswrapper[4766]: I0129 11:50:36.072433 4766 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podee945927-3683-4163-ac37-83d894a9569b"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podee945927-3683-4163-ac37-83d894a9569b] : Timed out while waiting for systemd to remove kubepods-besteffort-podee945927_3683_4163_ac37_83d894a9569b.slice" Jan 29 11:50:36 crc kubenswrapper[4766]: I0129 11:50:36.225049 4766 scope.go:117] "RemoveContainer" containerID="0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d" Jan 29 11:50:36 crc kubenswrapper[4766]: E0129 11:50:36.225399 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" 
podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 11:50:36 crc kubenswrapper[4766]: I0129 11:50:36.437526 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-flv87" podUID="7530c2b9-27f4-4100-a3cc-f73b46f86712" containerName="registry-server" containerID="cri-o://7b7ff34d6374987ee134dd18742dc109d735d4b36a411a960ad9120887fdeb27" gracePeriod=2 Jan 29 11:50:37 crc kubenswrapper[4766]: I0129 11:50:37.449949 4766 generic.go:334] "Generic (PLEG): container finished" podID="7530c2b9-27f4-4100-a3cc-f73b46f86712" containerID="7b7ff34d6374987ee134dd18742dc109d735d4b36a411a960ad9120887fdeb27" exitCode=0 Jan 29 11:50:37 crc kubenswrapper[4766]: I0129 11:50:37.450027 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-flv87" event={"ID":"7530c2b9-27f4-4100-a3cc-f73b46f86712","Type":"ContainerDied","Data":"7b7ff34d6374987ee134dd18742dc109d735d4b36a411a960ad9120887fdeb27"} Jan 29 11:50:37 crc kubenswrapper[4766]: I0129 11:50:37.450362 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-flv87" event={"ID":"7530c2b9-27f4-4100-a3cc-f73b46f86712","Type":"ContainerDied","Data":"113de0bcadd31f17f0ccc7de33789273721ec1471e4cb3272af667ac2a149f8e"} Jan 29 11:50:37 crc kubenswrapper[4766]: I0129 11:50:37.450376 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="113de0bcadd31f17f0ccc7de33789273721ec1471e4cb3272af667ac2a149f8e" Jan 29 11:50:37 crc kubenswrapper[4766]: I0129 11:50:37.454053 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-flv87" Jan 29 11:50:37 crc kubenswrapper[4766]: I0129 11:50:37.581295 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7530c2b9-27f4-4100-a3cc-f73b46f86712-utilities\") pod \"7530c2b9-27f4-4100-a3cc-f73b46f86712\" (UID: \"7530c2b9-27f4-4100-a3cc-f73b46f86712\") " Jan 29 11:50:37 crc kubenswrapper[4766]: I0129 11:50:37.581475 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7530c2b9-27f4-4100-a3cc-f73b46f86712-catalog-content\") pod \"7530c2b9-27f4-4100-a3cc-f73b46f86712\" (UID: \"7530c2b9-27f4-4100-a3cc-f73b46f86712\") " Jan 29 11:50:37 crc kubenswrapper[4766]: I0129 11:50:37.581511 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8qzb\" (UniqueName: \"kubernetes.io/projected/7530c2b9-27f4-4100-a3cc-f73b46f86712-kube-api-access-b8qzb\") pod \"7530c2b9-27f4-4100-a3cc-f73b46f86712\" (UID: \"7530c2b9-27f4-4100-a3cc-f73b46f86712\") " Jan 29 11:50:37 crc kubenswrapper[4766]: I0129 11:50:37.582598 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7530c2b9-27f4-4100-a3cc-f73b46f86712-utilities" (OuterVolumeSpecName: "utilities") pod "7530c2b9-27f4-4100-a3cc-f73b46f86712" (UID: "7530c2b9-27f4-4100-a3cc-f73b46f86712"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:50:37 crc kubenswrapper[4766]: I0129 11:50:37.585991 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7530c2b9-27f4-4100-a3cc-f73b46f86712-kube-api-access-b8qzb" (OuterVolumeSpecName: "kube-api-access-b8qzb") pod "7530c2b9-27f4-4100-a3cc-f73b46f86712" (UID: "7530c2b9-27f4-4100-a3cc-f73b46f86712"). InnerVolumeSpecName "kube-api-access-b8qzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:37 crc kubenswrapper[4766]: I0129 11:50:37.683361 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8qzb\" (UniqueName: \"kubernetes.io/projected/7530c2b9-27f4-4100-a3cc-f73b46f86712-kube-api-access-b8qzb\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:37 crc kubenswrapper[4766]: I0129 11:50:37.683500 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7530c2b9-27f4-4100-a3cc-f73b46f86712-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:38 crc kubenswrapper[4766]: I0129 11:50:38.430732 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7530c2b9-27f4-4100-a3cc-f73b46f86712-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7530c2b9-27f4-4100-a3cc-f73b46f86712" (UID: "7530c2b9-27f4-4100-a3cc-f73b46f86712"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:50:38 crc kubenswrapper[4766]: I0129 11:50:38.458792 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-flv87" Jan 29 11:50:38 crc kubenswrapper[4766]: I0129 11:50:38.494776 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7530c2b9-27f4-4100-a3cc-f73b46f86712-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:50:38 crc kubenswrapper[4766]: I0129 11:50:38.501621 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-flv87"] Jan 29 11:50:38 crc kubenswrapper[4766]: I0129 11:50:38.509246 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-flv87"] Jan 29 11:50:39 crc kubenswrapper[4766]: I0129 11:50:39.234061 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7530c2b9-27f4-4100-a3cc-f73b46f86712" path="/var/lib/kubelet/pods/7530c2b9-27f4-4100-a3cc-f73b46f86712/volumes" Jan 29 11:50:50 crc kubenswrapper[4766]: I0129 11:50:50.225003 4766 scope.go:117] "RemoveContainer" containerID="0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d" Jan 29 11:50:50 crc kubenswrapper[4766]: E0129 11:50:50.226153 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 11:51:01 crc kubenswrapper[4766]: I0129 11:51:01.225375 4766 scope.go:117] "RemoveContainer" containerID="0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d" Jan 29 11:51:01 crc kubenswrapper[4766]: E0129 11:51:01.226738 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 11:51:10 crc kubenswrapper[4766]: I0129 11:51:10.061651 4766 scope.go:117] "RemoveContainer" containerID="f90c3671694662e6b9f1584abc9bd6ae5dd46f25e77b8df0cd377c69033dc174" Jan 29 11:51:14 crc kubenswrapper[4766]: I0129 11:51:14.224984 4766 scope.go:117] "RemoveContainer" containerID="0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d" Jan 29 11:51:14 crc kubenswrapper[4766]: E0129 11:51:14.226485 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 11:51:25 crc kubenswrapper[4766]: I0129 11:51:25.229303 4766 scope.go:117] "RemoveContainer" containerID="0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d" Jan 29 11:51:25 crc kubenswrapper[4766]: E0129 11:51:25.230111 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 11:51:37 crc kubenswrapper[4766]: I0129 11:51:37.224395 4766 scope.go:117] "RemoveContainer" containerID="0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d" Jan 29 11:51:37 crc kubenswrapper[4766]: E0129 11:51:37.225219 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 11:51:50 crc kubenswrapper[4766]: I0129 11:51:50.224377 4766 scope.go:117] "RemoveContainer" containerID="0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d" Jan 29 11:51:50 crc kubenswrapper[4766]: E0129 11:51:50.224995 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 11:52:01 crc kubenswrapper[4766]: I0129 11:52:01.224508 4766 scope.go:117] "RemoveContainer" containerID="0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d" Jan 29 11:52:01 crc kubenswrapper[4766]: E0129 11:52:01.225348 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 11:52:10 crc kubenswrapper[4766]: I0129 11:52:10.264356 4766 scope.go:117] "RemoveContainer" containerID="b039e316a43239d08dfa9f608708018443b16533ce721c610c2d8645a7f4a4e3" Jan 29 11:52:10 crc kubenswrapper[4766]: I0129 11:52:10.305344 4766 scope.go:117] "RemoveContainer" containerID="fe3e8be97badf0ba50b26051af0565eb983bd7430bea4d09ce0b37e4f5910f20" Jan 29 11:52:10 crc kubenswrapper[4766]: I0129 11:52:10.371805 4766 scope.go:117] "RemoveContainer" containerID="1094875acb955bd1cd45b0f01cd633b759da7b9e7cfdb05fda344de26ec577fd" Jan 29 11:52:10 crc kubenswrapper[4766]: I0129 11:52:10.394138 4766 scope.go:117] "RemoveContainer" containerID="ca2bd9e6324bd1d1a4140bf7f5d26c398cd9fce6da66d4744127ae8f1a2b1c16" Jan 29 11:52:10 crc kubenswrapper[4766]: I0129 11:52:10.441814 4766 scope.go:117] "RemoveContainer" containerID="7e858737daba72926bb1c1a68da1eac711ef60cc06bf99c8cbce6410dc3a5bde" Jan 29 11:52:10 crc kubenswrapper[4766]: I0129 11:52:10.485830 4766 scope.go:117] "RemoveContainer" containerID="c6589f9f2c3d11e7c6da1ef88d0204322056c915176e7f812a7d04c2e1080a29" Jan 29 11:52:10 crc kubenswrapper[4766]: I0129 11:52:10.531676 4766 scope.go:117] "RemoveContainer" containerID="6888ef0b10075315f9d9a04b4f6de56e997d905248e9aa9f273d9e651d9ed15e" Jan 29 11:52:10 crc kubenswrapper[4766]: I0129 11:52:10.566570 4766 scope.go:117] "RemoveContainer" containerID="ff71015a48cd41c90b8eaa7f1da6a4da595e58f31de96caaf15023c6a05581ea" Jan 29 11:52:10 crc kubenswrapper[4766]: I0129 11:52:10.583172 4766 scope.go:117] "RemoveContainer" containerID="01ce65370c489b737bf027f65e90f5d3d975fcaf5a42a03b572e4c06aeb4f944" Jan 29 11:52:10 crc kubenswrapper[4766]: I0129 11:52:10.618835 4766 scope.go:117] "RemoveContainer" containerID="6178e29f3f97ee21ec4cf7acfdfb1b895e6b1f01bc50ed4550a76d923def4120" Jan 29 11:52:10 crc kubenswrapper[4766]: I0129 11:52:10.661498 4766 scope.go:117] "RemoveContainer" containerID="e773f46ba927ee115db6b04e8fb94c7b75de4344027c40028c5ce426541fa4a2" Jan 29 11:52:10 crc kubenswrapper[4766]: I0129 11:52:10.685177 4766 scope.go:117] "RemoveContainer" containerID="833f367854feff5ffe5e8329ce997906add38460016ec6b032fbca38e61fe6d2" Jan 29 11:52:10 crc kubenswrapper[4766]: I0129 11:52:10.705700 4766 scope.go:117] "RemoveContainer" containerID="1ae6efb8d7fd239f4a0fa84c1d56ee76ea87f272277e26ad7b41583d5980455a" Jan 29 11:52:10 crc kubenswrapper[4766]: I0129 11:52:10.725168 4766 scope.go:117] "RemoveContainer" containerID="713a52694314534b488fc1a0658f9e5b34496a8bf5bc37528ef84f27656debf8" Jan 29 11:52:10 crc kubenswrapper[4766]: I0129 11:52:10.742280 4766 scope.go:117] "RemoveContainer" containerID="14a2b107d79060a55cc0736ce17cf6283f9abfc277732b4f4dd73c17cc6a1369" Jan 29 11:52:10 crc kubenswrapper[4766]: I0129 11:52:10.767308 4766 scope.go:117] "RemoveContainer" containerID="c2a732eef5e758404b95f298f724b689625e87f72dd1bc97d1ce025f6bf657aa" Jan 29 11:52:10 crc kubenswrapper[4766]: I0129 11:52:10.789161 4766 scope.go:117] "RemoveContainer" containerID="ac77ae0dc0937af8cf11d98edc90f066da8b3ffdaa95852c341e7662f0d23df6" Jan 29 11:52:10 crc kubenswrapper[4766]: I0129 11:52:10.837747 4766 scope.go:117] "RemoveContainer" 
containerID="55b6f91779e615e2274808931072eec9e0ba72e68ba23e7d95cb2a545b23ea53" Jan 29 11:52:10 crc kubenswrapper[4766]: I0129 11:52:10.857134 4766 scope.go:117] "RemoveContainer" containerID="b7e72beaf22658c2e8cba01afcdcc85729f545e1f1f6abf942d86470e5e22369" Jan 29 11:52:10 crc kubenswrapper[4766]: I0129 11:52:10.874115 4766 scope.go:117] "RemoveContainer" containerID="8393252b4e7275a9df8ff86e89fa4bd74cece45a96b553385aa19ae061de0cca" Jan 29 11:52:13 crc kubenswrapper[4766]: I0129 11:52:13.224804 4766 scope.go:117] "RemoveContainer" containerID="0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d" Jan 29 11:52:13 crc kubenswrapper[4766]: E0129 11:52:13.225535 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 11:52:28 crc kubenswrapper[4766]: I0129 11:52:28.225095 4766 scope.go:117] "RemoveContainer" containerID="0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d" Jan 29 11:52:28 crc kubenswrapper[4766]: E0129 11:52:28.225950 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 11:52:41 crc kubenswrapper[4766]: I0129 11:52:41.224132 4766 scope.go:117] "RemoveContainer" containerID="0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d" Jan 29 11:52:41 crc kubenswrapper[4766]: E0129 11:52:41.225030 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 11:52:54 crc kubenswrapper[4766]: I0129 11:52:54.225152 4766 scope.go:117] "RemoveContainer" containerID="0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d" Jan 29 11:52:54 crc kubenswrapper[4766]: E0129 11:52:54.225981 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 11:53:09 crc kubenswrapper[4766]: I0129 11:53:09.225093 4766 scope.go:117] "RemoveContainer" containerID="0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d" Jan 29 11:53:09 crc kubenswrapper[4766]: E0129 11:53:09.225988 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 11:53:11 crc kubenswrapper[4766]: I0129 11:53:11.209204 4766 scope.go:117] "RemoveContainer" containerID="69129b08fe5bfc552d777715d2a1eac20f74a31b1c06ebb3940050c592d7eaeb" Jan 29 11:53:11 crc kubenswrapper[4766]: I0129 11:53:11.235172 4766 scope.go:117] "RemoveContainer" containerID="7a100abcf625aab3d1ab04d02bb7f3a8d947c5eb7f100414cb20bc19c04d918d" Jan 29 11:53:11 crc kubenswrapper[4766]: I0129 11:53:11.261794 4766 scope.go:117] "RemoveContainer" containerID="e338a8fb60dfe5f32593a50ae289d0ea1611a14385380bb0080e86f210f7ecab" Jan 29 11:53:11 crc kubenswrapper[4766]: I0129 11:53:11.283849 4766 scope.go:117] "RemoveContainer" containerID="679c7206ac2f82b82e8b1a3ca3a64bf5f1d0710a5dba85f183e20c4390695423" Jan 29 11:53:11 crc kubenswrapper[4766]: I0129 11:53:11.300699 4766 scope.go:117] "RemoveContainer" containerID="d786e0405320809a96af346c35692c438ad7742c08fa3ce17122a0f104fafaa5" Jan 29 11:53:11 crc kubenswrapper[4766]: I0129 11:53:11.317484 4766 scope.go:117] "RemoveContainer" containerID="b2565533fc97d56cca8b0208c040bb47fdbe135ae4090f3faf34f8876d98061e" Jan 29 11:53:11 crc kubenswrapper[4766]: I0129 11:53:11.339072 4766 scope.go:117] "RemoveContainer" containerID="ecbdaa68778e5b026bb15de99d381a247817155af659c96860c67bd842555592" Jan 29 11:53:11 crc kubenswrapper[4766]: I0129 11:53:11.358526 4766 scope.go:117] "RemoveContainer" containerID="1a5abfa52d446485a846754eb67676b987aa6b3104b0f18b430343686110ea02" Jan 29 11:53:11 crc kubenswrapper[4766]: I0129 11:53:11.396255 4766 scope.go:117] "RemoveContainer" containerID="480ab67482035d0544f7ef590575a0908518a57e5ecf17c5349eaf4d31105da6" Jan 29 11:53:11 crc kubenswrapper[4766]: I0129 11:53:11.425344 4766 scope.go:117] "RemoveContainer" containerID="a5c49449e84d148200b6f0a47a8ec23b2f77e9135152810c5d0bbabc622713e8" Jan 29 11:53:11 crc kubenswrapper[4766]: I0129 11:53:11.446499 4766 scope.go:117] "RemoveContainer" containerID="a1b01b98a17e9068636d4080c5efde45109be4e9266579fe903c48c395a384f6" Jan 29 11:53:11 crc kubenswrapper[4766]: I0129 11:53:11.483501 4766 scope.go:117] "RemoveContainer" containerID="9982d06a3f9e319a6ac98d0397be8271cb4490d37b4f3f2be7d30bd0f946c97e" Jan 29 11:53:21 crc kubenswrapper[4766]: I0129 11:53:21.224439 4766 scope.go:117] "RemoveContainer" containerID="0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d" Jan 29 11:53:22 crc kubenswrapper[4766]: I0129 11:53:22.270155 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" event={"ID":"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a","Type":"ContainerStarted","Data":"f76b3ff74744da666ed65ffb5d07b936b08f79a440db15c1a072c4967b696218"} Jan 29 11:54:11 crc kubenswrapper[4766]: I0129 11:54:11.615061 4766 scope.go:117] "RemoveContainer" containerID="b47293f7cec9d0af51cd2d23e9b89b2afc946db075024da64ce50cf1a5082bcb" Jan 29 11:54:11 crc kubenswrapper[4766]: I0129 11:54:11.634745 4766 scope.go:117] "RemoveContainer" containerID="0f81dc60f6935f2c85c58f5ca30e75b0ad34b984d68a583addc429fb98cbd09d" Jan 29 11:54:11 crc kubenswrapper[4766]: I0129 11:54:11.671608 4766 scope.go:117] "RemoveContainer" containerID="9b375c5530a1a3775341ca3d65d9013b6d89fdf7753d546f691d72af15d5a3a6" Jan 29 11:54:11 crc kubenswrapper[4766]: I0129 11:54:11.699734 4766 scope.go:117] "RemoveContainer" 
containerID="a5870626b08c5ff65aad3d62a1002578aa41b4503406b749e77a94df8bdaa959" Jan 29 11:54:11 crc kubenswrapper[4766]: I0129 11:54:11.730888 4766 scope.go:117] "RemoveContainer" containerID="d7757455cbb85c897a50eea066ec215bb182c05d10d10b012cbf172f9eac52e9" Jan 29 11:54:11 crc kubenswrapper[4766]: I0129 11:54:11.747129 4766 scope.go:117] "RemoveContainer" containerID="975d9dec64a2fca25f52a750db0c70feb57df9e6479ecb4133299bd8f6a0e06c" Jan 29 11:54:11 crc kubenswrapper[4766]: I0129 11:54:11.772707 4766 scope.go:117] "RemoveContainer" containerID="a2d18f5ec3dadbff7d926f5e1d5f9e45f90f6843b5fe1c8d7b1e4834d350542d" Jan 29 11:54:11 crc kubenswrapper[4766]: I0129 11:54:11.791597 4766 scope.go:117] "RemoveContainer" containerID="5c4d7a8ef15ea08f1047185923173dff7aaa7691455c34c2f8cea7f984b1d2d4" Jan 29 11:54:11 crc kubenswrapper[4766]: I0129 11:54:11.817203 4766 scope.go:117] "RemoveContainer" containerID="e9d3c086db8be6c2238dd8bc1ca1ec8cf931703d74662e7deb56e597e993e11f" Jan 29 11:54:11 crc kubenswrapper[4766]: I0129 11:54:11.849945 4766 scope.go:117] "RemoveContainer" containerID="5f484b8e00e79b044b603b23bc146e1024f8a58609cafd703ef2e0617e674445" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.550577 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tb6gk"] Jan 29 11:54:30 crc kubenswrapper[4766]: E0129 11:54:30.551356 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7530c2b9-27f4-4100-a3cc-f73b46f86712" containerName="extract-content" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551371 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7530c2b9-27f4-4100-a3cc-f73b46f86712" containerName="extract-content" Jan 29 11:54:30 crc kubenswrapper[4766]: E0129 11:54:30.551387 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="container-updater" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551393 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="container-updater" Jan 29 11:54:30 crc kubenswrapper[4766]: E0129 11:54:30.551424 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="object-server" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551431 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="object-server" Jan 29 11:54:30 crc kubenswrapper[4766]: E0129 11:54:30.551438 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="container-server" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551445 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="container-server" Jan 29 11:54:30 crc kubenswrapper[4766]: E0129 11:54:30.551456 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7530c2b9-27f4-4100-a3cc-f73b46f86712" containerName="extract-utilities" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551462 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7530c2b9-27f4-4100-a3cc-f73b46f86712" containerName="extract-utilities" Jan 29 11:54:30 crc kubenswrapper[4766]: E0129 11:54:30.551473 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="object-expirer" Jan 29 11:54:30 crc 
kubenswrapper[4766]: I0129 11:54:30.551480 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="object-expirer" Jan 29 11:54:30 crc kubenswrapper[4766]: E0129 11:54:30.551488 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="swift-recon-cron" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551494 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="swift-recon-cron" Jan 29 11:54:30 crc kubenswrapper[4766]: E0129 11:54:30.551505 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="object-updater" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551511 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="object-updater" Jan 29 11:54:30 crc kubenswrapper[4766]: E0129 11:54:30.551520 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be830961-a6c3-4340-a134-ea20de96b31b" containerName="ovsdb-server" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551528 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="be830961-a6c3-4340-a134-ea20de96b31b" containerName="ovsdb-server" Jan 29 11:54:30 crc kubenswrapper[4766]: E0129 11:54:30.551537 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="object-replicator" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551542 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="object-replicator" Jan 29 11:54:30 crc kubenswrapper[4766]: E0129 11:54:30.551556 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="account-replicator" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551563 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="account-replicator" Jan 29 11:54:30 crc kubenswrapper[4766]: E0129 11:54:30.551576 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="account-reaper" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551583 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="account-reaper" Jan 29 11:54:30 crc kubenswrapper[4766]: E0129 11:54:30.551594 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="account-auditor" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551600 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="account-auditor" Jan 29 11:54:30 crc kubenswrapper[4766]: E0129 11:54:30.551612 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="object-auditor" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551618 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="object-auditor" Jan 29 11:54:30 crc kubenswrapper[4766]: E0129 11:54:30.551627 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be830961-a6c3-4340-a134-ea20de96b31b" containerName="ovs-vswitchd" Jan 
29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551634 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="be830961-a6c3-4340-a134-ea20de96b31b" containerName="ovs-vswitchd" Jan 29 11:54:30 crc kubenswrapper[4766]: E0129 11:54:30.551645 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="account-server" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551652 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="account-server" Jan 29 11:54:30 crc kubenswrapper[4766]: E0129 11:54:30.551661 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="container-auditor" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551667 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="container-auditor" Jan 29 11:54:30 crc kubenswrapper[4766]: E0129 11:54:30.551675 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="rsync" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551682 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="rsync" Jan 29 11:54:30 crc kubenswrapper[4766]: E0129 11:54:30.551694 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7530c2b9-27f4-4100-a3cc-f73b46f86712" containerName="registry-server" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551700 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7530c2b9-27f4-4100-a3cc-f73b46f86712" containerName="registry-server" Jan 29 11:54:30 crc kubenswrapper[4766]: E0129 11:54:30.551711 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be830961-a6c3-4340-a134-ea20de96b31b" containerName="ovsdb-server-init" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551717 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="be830961-a6c3-4340-a134-ea20de96b31b" containerName="ovsdb-server-init" Jan 29 11:54:30 crc kubenswrapper[4766]: E0129 11:54:30.551725 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="container-replicator" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551732 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="container-replicator" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551864 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="object-expirer" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551878 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7530c2b9-27f4-4100-a3cc-f73b46f86712" containerName="registry-server" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551888 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="account-replicator" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551895 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="swift-recon-cron" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551903 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" 
containerName="object-replicator" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551911 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="container-updater" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551924 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="container-auditor" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551933 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="object-server" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551941 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="object-updater" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551947 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="be830961-a6c3-4340-a134-ea20de96b31b" containerName="ovsdb-server" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551956 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="be830961-a6c3-4340-a134-ea20de96b31b" containerName="ovs-vswitchd" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551966 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="object-auditor" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551974 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="account-server" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551983 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="container-server" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.551993 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="account-auditor" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.552001 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="account-reaper" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.552010 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="rsync" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.552020 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c299dfaa-12db-4482-ab89-55ba85b8e2a7" containerName="container-replicator" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.553215 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tb6gk" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.566655 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tb6gk"] Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.608162 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b235d338-5213-46d8-8e8e-d9f31aa241c1-catalog-content\") pod \"certified-operators-tb6gk\" (UID: \"b235d338-5213-46d8-8e8e-d9f31aa241c1\") " pod="openshift-marketplace/certified-operators-tb6gk" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.608203 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d78dc\" (UniqueName: \"kubernetes.io/projected/b235d338-5213-46d8-8e8e-d9f31aa241c1-kube-api-access-d78dc\") pod \"certified-operators-tb6gk\" (UID: \"b235d338-5213-46d8-8e8e-d9f31aa241c1\") " pod="openshift-marketplace/certified-operators-tb6gk" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.608236 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b235d338-5213-46d8-8e8e-d9f31aa241c1-utilities\") pod \"certified-operators-tb6gk\" (UID: \"b235d338-5213-46d8-8e8e-d9f31aa241c1\") " pod="openshift-marketplace/certified-operators-tb6gk" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.710844 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b235d338-5213-46d8-8e8e-d9f31aa241c1-catalog-content\") pod \"certified-operators-tb6gk\" (UID: \"b235d338-5213-46d8-8e8e-d9f31aa241c1\") " pod="openshift-marketplace/certified-operators-tb6gk" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.710891 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d78dc\" (UniqueName: \"kubernetes.io/projected/b235d338-5213-46d8-8e8e-d9f31aa241c1-kube-api-access-d78dc\") pod \"certified-operators-tb6gk\" (UID: \"b235d338-5213-46d8-8e8e-d9f31aa241c1\") " pod="openshift-marketplace/certified-operators-tb6gk" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.710917 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b235d338-5213-46d8-8e8e-d9f31aa241c1-utilities\") pod \"certified-operators-tb6gk\" (UID: \"b235d338-5213-46d8-8e8e-d9f31aa241c1\") " pod="openshift-marketplace/certified-operators-tb6gk" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.711439 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b235d338-5213-46d8-8e8e-d9f31aa241c1-utilities\") pod \"certified-operators-tb6gk\" (UID: \"b235d338-5213-46d8-8e8e-d9f31aa241c1\") " pod="openshift-marketplace/certified-operators-tb6gk" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.711463 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b235d338-5213-46d8-8e8e-d9f31aa241c1-catalog-content\") pod \"certified-operators-tb6gk\" (UID: \"b235d338-5213-46d8-8e8e-d9f31aa241c1\") " pod="openshift-marketplace/certified-operators-tb6gk" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.732373 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-d78dc\" (UniqueName: \"kubernetes.io/projected/b235d338-5213-46d8-8e8e-d9f31aa241c1-kube-api-access-d78dc\") pod \"certified-operators-tb6gk\" (UID: \"b235d338-5213-46d8-8e8e-d9f31aa241c1\") " pod="openshift-marketplace/certified-operators-tb6gk" Jan 29 11:54:30 crc kubenswrapper[4766]: I0129 11:54:30.877549 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tb6gk" Jan 29 11:54:31 crc kubenswrapper[4766]: I0129 11:54:31.374285 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tb6gk"] Jan 29 11:54:31 crc kubenswrapper[4766]: I0129 11:54:31.811683 4766 generic.go:334] "Generic (PLEG): container finished" podID="b235d338-5213-46d8-8e8e-d9f31aa241c1" containerID="3ae1dfdc840eafc59bab899977aa84998cbb7e75823d0beebcc7ab1ddbb1d704" exitCode=0 Jan 29 11:54:31 crc kubenswrapper[4766]: I0129 11:54:31.811735 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tb6gk" event={"ID":"b235d338-5213-46d8-8e8e-d9f31aa241c1","Type":"ContainerDied","Data":"3ae1dfdc840eafc59bab899977aa84998cbb7e75823d0beebcc7ab1ddbb1d704"} Jan 29 11:54:31 crc kubenswrapper[4766]: I0129 11:54:31.811766 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tb6gk" event={"ID":"b235d338-5213-46d8-8e8e-d9f31aa241c1","Type":"ContainerStarted","Data":"627a2136942fd5976f0e2596787dc1f2691c5672db2f7a51d6b9921a3b87ee1f"} Jan 29 11:54:31 crc kubenswrapper[4766]: I0129 11:54:31.814197 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 11:54:34 crc kubenswrapper[4766]: I0129 11:54:34.834894 4766 generic.go:334] "Generic (PLEG): container finished" podID="b235d338-5213-46d8-8e8e-d9f31aa241c1" containerID="4da59325013ddec41c6e5845424223e86b3a6789324dfd3367d05875f27d9752" exitCode=0 Jan 29 11:54:34 crc kubenswrapper[4766]: I0129 11:54:34.835403 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tb6gk" event={"ID":"b235d338-5213-46d8-8e8e-d9f31aa241c1","Type":"ContainerDied","Data":"4da59325013ddec41c6e5845424223e86b3a6789324dfd3367d05875f27d9752"} Jan 29 11:54:35 crc kubenswrapper[4766]: I0129 11:54:35.852226 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tb6gk" event={"ID":"b235d338-5213-46d8-8e8e-d9f31aa241c1","Type":"ContainerStarted","Data":"ab64efb41921537f8627aadd6ee65385df6f5d0783d41d5a9270072a33ce6a09"} Jan 29 11:54:35 crc kubenswrapper[4766]: I0129 11:54:35.874368 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tb6gk" podStartSLOduration=2.037371179 podStartE2EDuration="5.874345352s" podCreationTimestamp="2026-01-29 11:54:30 +0000 UTC" firstStartedPulling="2026-01-29 11:54:31.81393225 +0000 UTC m=+2008.926325261" lastFinishedPulling="2026-01-29 11:54:35.650906423 +0000 UTC m=+2012.763299434" observedRunningTime="2026-01-29 11:54:35.868708981 +0000 UTC m=+2012.981102002" watchObservedRunningTime="2026-01-29 11:54:35.874345352 +0000 UTC m=+2012.986738373" Jan 29 11:54:40 crc kubenswrapper[4766]: I0129 11:54:40.879266 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tb6gk" Jan 29 11:54:40 crc kubenswrapper[4766]: I0129 11:54:40.879632 4766 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tb6gk" Jan 29 11:54:40 crc kubenswrapper[4766]: I0129 11:54:40.924189 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tb6gk" Jan 29 11:54:40 crc kubenswrapper[4766]: I0129 11:54:40.969331 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tb6gk" Jan 29 11:54:41 crc kubenswrapper[4766]: I0129 11:54:41.559821 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tb6gk"] Jan 29 11:54:42 crc kubenswrapper[4766]: I0129 11:54:42.898934 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tb6gk" podUID="b235d338-5213-46d8-8e8e-d9f31aa241c1" containerName="registry-server" containerID="cri-o://ab64efb41921537f8627aadd6ee65385df6f5d0783d41d5a9270072a33ce6a09" gracePeriod=2 Jan 29 11:54:43 crc kubenswrapper[4766]: I0129 11:54:43.918568 4766 generic.go:334] "Generic (PLEG): container finished" podID="b235d338-5213-46d8-8e8e-d9f31aa241c1" containerID="ab64efb41921537f8627aadd6ee65385df6f5d0783d41d5a9270072a33ce6a09" exitCode=0 Jan 29 11:54:43 crc kubenswrapper[4766]: I0129 11:54:43.918649 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tb6gk" event={"ID":"b235d338-5213-46d8-8e8e-d9f31aa241c1","Type":"ContainerDied","Data":"ab64efb41921537f8627aadd6ee65385df6f5d0783d41d5a9270072a33ce6a09"} Jan 29 11:54:44 crc kubenswrapper[4766]: I0129 11:54:44.407882 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tb6gk" Jan 29 11:54:44 crc kubenswrapper[4766]: I0129 11:54:44.509065 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b235d338-5213-46d8-8e8e-d9f31aa241c1-utilities\") pod \"b235d338-5213-46d8-8e8e-d9f31aa241c1\" (UID: \"b235d338-5213-46d8-8e8e-d9f31aa241c1\") " Jan 29 11:54:44 crc kubenswrapper[4766]: I0129 11:54:44.509146 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b235d338-5213-46d8-8e8e-d9f31aa241c1-catalog-content\") pod \"b235d338-5213-46d8-8e8e-d9f31aa241c1\" (UID: \"b235d338-5213-46d8-8e8e-d9f31aa241c1\") " Jan 29 11:54:44 crc kubenswrapper[4766]: I0129 11:54:44.509232 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d78dc\" (UniqueName: \"kubernetes.io/projected/b235d338-5213-46d8-8e8e-d9f31aa241c1-kube-api-access-d78dc\") pod \"b235d338-5213-46d8-8e8e-d9f31aa241c1\" (UID: \"b235d338-5213-46d8-8e8e-d9f31aa241c1\") " Jan 29 11:54:44 crc kubenswrapper[4766]: I0129 11:54:44.511371 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b235d338-5213-46d8-8e8e-d9f31aa241c1-utilities" (OuterVolumeSpecName: "utilities") pod "b235d338-5213-46d8-8e8e-d9f31aa241c1" (UID: "b235d338-5213-46d8-8e8e-d9f31aa241c1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:54:44 crc kubenswrapper[4766]: I0129 11:54:44.514551 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b235d338-5213-46d8-8e8e-d9f31aa241c1-kube-api-access-d78dc" (OuterVolumeSpecName: "kube-api-access-d78dc") pod "b235d338-5213-46d8-8e8e-d9f31aa241c1" (UID: "b235d338-5213-46d8-8e8e-d9f31aa241c1"). InnerVolumeSpecName "kube-api-access-d78dc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:54:44 crc kubenswrapper[4766]: I0129 11:54:44.562804 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b235d338-5213-46d8-8e8e-d9f31aa241c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b235d338-5213-46d8-8e8e-d9f31aa241c1" (UID: "b235d338-5213-46d8-8e8e-d9f31aa241c1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:54:44 crc kubenswrapper[4766]: I0129 11:54:44.610632 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b235d338-5213-46d8-8e8e-d9f31aa241c1-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:54:44 crc kubenswrapper[4766]: I0129 11:54:44.610667 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b235d338-5213-46d8-8e8e-d9f31aa241c1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:54:44 crc kubenswrapper[4766]: I0129 11:54:44.610678 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d78dc\" (UniqueName: \"kubernetes.io/projected/b235d338-5213-46d8-8e8e-d9f31aa241c1-kube-api-access-d78dc\") on node \"crc\" DevicePath \"\"" Jan 29 11:54:44 crc kubenswrapper[4766]: I0129 11:54:44.931283 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tb6gk" event={"ID":"b235d338-5213-46d8-8e8e-d9f31aa241c1","Type":"ContainerDied","Data":"627a2136942fd5976f0e2596787dc1f2691c5672db2f7a51d6b9921a3b87ee1f"} Jan 29 11:54:44 crc kubenswrapper[4766]: I0129 11:54:44.931875 4766 scope.go:117] "RemoveContainer" containerID="ab64efb41921537f8627aadd6ee65385df6f5d0783d41d5a9270072a33ce6a09" Jan 29 11:54:44 crc kubenswrapper[4766]: I0129 11:54:44.932284 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tb6gk" Jan 29 11:54:44 crc kubenswrapper[4766]: I0129 11:54:44.950084 4766 scope.go:117] "RemoveContainer" containerID="4da59325013ddec41c6e5845424223e86b3a6789324dfd3367d05875f27d9752" Jan 29 11:54:44 crc kubenswrapper[4766]: I0129 11:54:44.968466 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tb6gk"] Jan 29 11:54:44 crc kubenswrapper[4766]: I0129 11:54:44.973737 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tb6gk"] Jan 29 11:54:44 crc kubenswrapper[4766]: I0129 11:54:44.985919 4766 scope.go:117] "RemoveContainer" containerID="3ae1dfdc840eafc59bab899977aa84998cbb7e75823d0beebcc7ab1ddbb1d704" Jan 29 11:54:45 crc kubenswrapper[4766]: I0129 11:54:45.234502 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b235d338-5213-46d8-8e8e-d9f31aa241c1" path="/var/lib/kubelet/pods/b235d338-5213-46d8-8e8e-d9f31aa241c1/volumes" Jan 29 11:55:11 crc kubenswrapper[4766]: I0129 11:55:11.966860 4766 scope.go:117] "RemoveContainer" containerID="bb85acb54b614f03184470afb4304b466f98b4b04565bd127f8ffdb642c19047" Jan 29 11:55:12 crc kubenswrapper[4766]: I0129 11:55:12.012146 4766 scope.go:117] "RemoveContainer" containerID="657c679c3282f0775c066abcb6cd841de1f904e5d86da1996d18f252a03f6653" Jan 29 11:55:12 crc kubenswrapper[4766]: I0129 11:55:12.045776 4766 scope.go:117] "RemoveContainer" containerID="a705a413907ef327b1f4edf40f66d732da2bb972954ab9c5d54ac904827256ea" Jan 29 11:55:46 crc kubenswrapper[4766]: I0129 11:55:46.361853 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:55:46 crc kubenswrapper[4766]: I0129 11:55:46.362442 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:56:16 crc kubenswrapper[4766]: I0129 11:56:16.362005 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:56:16 crc kubenswrapper[4766]: I0129 11:56:16.363869 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:56:46 crc kubenswrapper[4766]: I0129 11:56:46.361918 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:56:46 crc kubenswrapper[4766]: I0129 11:56:46.362525 4766 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:56:46 crc kubenswrapper[4766]: I0129 11:56:46.362572 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" Jan 29 11:56:46 crc kubenswrapper[4766]: I0129 11:56:46.363200 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f76b3ff74744da666ed65ffb5d07b936b08f79a440db15c1a072c4967b696218"} pod="openshift-machine-config-operator/machine-config-daemon-npgg8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:56:46 crc kubenswrapper[4766]: I0129 11:56:46.363257 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" containerID="cri-o://f76b3ff74744da666ed65ffb5d07b936b08f79a440db15c1a072c4967b696218" gracePeriod=600 Jan 29 11:56:46 crc kubenswrapper[4766]: I0129 11:56:46.817163 4766 generic.go:334] "Generic (PLEG): container finished" podID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerID="f76b3ff74744da666ed65ffb5d07b936b08f79a440db15c1a072c4967b696218" exitCode=0 Jan 29 11:56:46 crc kubenswrapper[4766]: I0129 11:56:46.817224 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" event={"ID":"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a","Type":"ContainerDied","Data":"f76b3ff74744da666ed65ffb5d07b936b08f79a440db15c1a072c4967b696218"} Jan 29 11:56:46 crc kubenswrapper[4766]: I0129 11:56:46.817313 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" event={"ID":"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a","Type":"ContainerStarted","Data":"89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191"} Jan 29 11:56:46 crc kubenswrapper[4766]: I0129 11:56:46.817355 4766 scope.go:117] "RemoveContainer" containerID="0533d3cd201d8df748a679808785afe81ac4f6800edece4327e69cb5f8cce31d" Jan 29 11:57:12 crc kubenswrapper[4766]: I0129 11:57:12.161553 4766 scope.go:117] "RemoveContainer" containerID="724b4a6fc729996f14f06d0ae5d48a3c7e040dc915d959a80be6f8d5a2675879" Jan 29 11:57:12 crc kubenswrapper[4766]: I0129 11:57:12.192652 4766 scope.go:117] "RemoveContainer" containerID="7b7ff34d6374987ee134dd18742dc109d735d4b36a411a960ad9120887fdeb27" Jan 29 11:57:12 crc kubenswrapper[4766]: I0129 11:57:12.223548 4766 scope.go:117] "RemoveContainer" containerID="c8d5cc9c2c029217a91943a9931c40d96ed1692505a7f4719be51c53ecd24f57" Jan 29 11:58:46 crc kubenswrapper[4766]: I0129 11:58:46.361948 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:58:46 crc kubenswrapper[4766]: I0129 11:58:46.363268 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:59:16 crc kubenswrapper[4766]: I0129 11:59:16.361875 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:59:16 crc kubenswrapper[4766]: I0129 11:59:16.362377 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:59:46 crc kubenswrapper[4766]: I0129 11:59:46.362302 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:59:46 crc kubenswrapper[4766]: I0129 11:59:46.362973 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:59:46 crc kubenswrapper[4766]: I0129 11:59:46.363031 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" Jan 29 11:59:46 crc kubenswrapper[4766]: I0129 11:59:46.363867 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191"} pod="openshift-machine-config-operator/machine-config-daemon-npgg8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:59:46 crc kubenswrapper[4766]: I0129 11:59:46.363924 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" containerID="cri-o://89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191" gracePeriod=600 Jan 29 11:59:47 crc kubenswrapper[4766]: E0129 11:59:47.020712 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 11:59:47 crc kubenswrapper[4766]: I0129 11:59:47.142504 4766 generic.go:334] "Generic (PLEG): container finished" podID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerID="89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191" exitCode=0 Jan 29 11:59:47 crc kubenswrapper[4766]: I0129 11:59:47.142552 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-npgg8" event={"ID":"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a","Type":"ContainerDied","Data":"89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191"} Jan 29 11:59:47 crc kubenswrapper[4766]: I0129 11:59:47.142607 4766 scope.go:117] "RemoveContainer" containerID="f76b3ff74744da666ed65ffb5d07b936b08f79a440db15c1a072c4967b696218" Jan 29 11:59:47 crc kubenswrapper[4766]: I0129 11:59:47.144057 4766 scope.go:117] "RemoveContainer" containerID="89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191" Jan 29 11:59:47 crc kubenswrapper[4766]: E0129 11:59:47.144658 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 11:59:54 crc kubenswrapper[4766]: I0129 11:59:54.197722 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lhg64"] Jan 29 11:59:54 crc kubenswrapper[4766]: E0129 11:59:54.198527 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b235d338-5213-46d8-8e8e-d9f31aa241c1" containerName="extract-utilities" Jan 29 11:59:54 crc kubenswrapper[4766]: I0129 11:59:54.198539 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b235d338-5213-46d8-8e8e-d9f31aa241c1" containerName="extract-utilities" Jan 29 11:59:54 crc kubenswrapper[4766]: E0129 11:59:54.198549 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b235d338-5213-46d8-8e8e-d9f31aa241c1" containerName="registry-server" Jan 29 11:59:54 crc kubenswrapper[4766]: I0129 11:59:54.198555 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b235d338-5213-46d8-8e8e-d9f31aa241c1" containerName="registry-server" Jan 29 11:59:54 crc kubenswrapper[4766]: E0129 11:59:54.198568 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b235d338-5213-46d8-8e8e-d9f31aa241c1" containerName="extract-content" Jan 29 11:59:54 crc kubenswrapper[4766]: I0129 11:59:54.198575 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b235d338-5213-46d8-8e8e-d9f31aa241c1" containerName="extract-content" Jan 29 11:59:54 crc kubenswrapper[4766]: I0129 11:59:54.198730 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b235d338-5213-46d8-8e8e-d9f31aa241c1" containerName="registry-server" Jan 29 11:59:54 crc kubenswrapper[4766]: I0129 11:59:54.200392 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lhg64" Jan 29 11:59:54 crc kubenswrapper[4766]: I0129 11:59:54.212898 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lhg64"] Jan 29 11:59:54 crc kubenswrapper[4766]: I0129 11:59:54.301392 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a09d1f27-90f9-4f9b-8768-1d1407a611e4-catalog-content\") pod \"redhat-operators-lhg64\" (UID: \"a09d1f27-90f9-4f9b-8768-1d1407a611e4\") " pod="openshift-marketplace/redhat-operators-lhg64" Jan 29 11:59:54 crc kubenswrapper[4766]: I0129 11:59:54.301459 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a09d1f27-90f9-4f9b-8768-1d1407a611e4-utilities\") pod \"redhat-operators-lhg64\" (UID: \"a09d1f27-90f9-4f9b-8768-1d1407a611e4\") " pod="openshift-marketplace/redhat-operators-lhg64" Jan 29 11:59:54 crc kubenswrapper[4766]: I0129 11:59:54.301500 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqbpp\" (UniqueName: \"kubernetes.io/projected/a09d1f27-90f9-4f9b-8768-1d1407a611e4-kube-api-access-sqbpp\") pod \"redhat-operators-lhg64\" (UID: \"a09d1f27-90f9-4f9b-8768-1d1407a611e4\") " pod="openshift-marketplace/redhat-operators-lhg64" Jan 29 11:59:54 crc kubenswrapper[4766]: I0129 11:59:54.425789 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqbpp\" (UniqueName: \"kubernetes.io/projected/a09d1f27-90f9-4f9b-8768-1d1407a611e4-kube-api-access-sqbpp\") pod \"redhat-operators-lhg64\" (UID: \"a09d1f27-90f9-4f9b-8768-1d1407a611e4\") " pod="openshift-marketplace/redhat-operators-lhg64" Jan 29 11:59:54 crc kubenswrapper[4766]: I0129 11:59:54.426032 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a09d1f27-90f9-4f9b-8768-1d1407a611e4-catalog-content\") pod \"redhat-operators-lhg64\" (UID: \"a09d1f27-90f9-4f9b-8768-1d1407a611e4\") " pod="openshift-marketplace/redhat-operators-lhg64" Jan 29 11:59:54 crc kubenswrapper[4766]: I0129 11:59:54.426069 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a09d1f27-90f9-4f9b-8768-1d1407a611e4-utilities\") pod \"redhat-operators-lhg64\" (UID: \"a09d1f27-90f9-4f9b-8768-1d1407a611e4\") " pod="openshift-marketplace/redhat-operators-lhg64" Jan 29 11:59:54 crc kubenswrapper[4766]: I0129 11:59:54.426659 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a09d1f27-90f9-4f9b-8768-1d1407a611e4-utilities\") pod \"redhat-operators-lhg64\" (UID: \"a09d1f27-90f9-4f9b-8768-1d1407a611e4\") " pod="openshift-marketplace/redhat-operators-lhg64" Jan 29 11:59:54 crc kubenswrapper[4766]: I0129 11:59:54.426784 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a09d1f27-90f9-4f9b-8768-1d1407a611e4-catalog-content\") pod \"redhat-operators-lhg64\" (UID: \"a09d1f27-90f9-4f9b-8768-1d1407a611e4\") " pod="openshift-marketplace/redhat-operators-lhg64" Jan 29 11:59:54 crc kubenswrapper[4766]: I0129 11:59:54.448311 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-sqbpp\" (UniqueName: \"kubernetes.io/projected/a09d1f27-90f9-4f9b-8768-1d1407a611e4-kube-api-access-sqbpp\") pod \"redhat-operators-lhg64\" (UID: \"a09d1f27-90f9-4f9b-8768-1d1407a611e4\") " pod="openshift-marketplace/redhat-operators-lhg64" Jan 29 11:59:54 crc kubenswrapper[4766]: I0129 11:59:54.522318 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lhg64" Jan 29 11:59:54 crc kubenswrapper[4766]: I0129 11:59:54.963309 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lhg64"] Jan 29 11:59:55 crc kubenswrapper[4766]: I0129 11:59:55.201231 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhg64" event={"ID":"a09d1f27-90f9-4f9b-8768-1d1407a611e4","Type":"ContainerStarted","Data":"93033ec9b62e30139d81df78be7a52eb4d68094b14340df5b8051843b5dd182c"} Jan 29 11:59:56 crc kubenswrapper[4766]: I0129 11:59:56.209589 4766 generic.go:334] "Generic (PLEG): container finished" podID="a09d1f27-90f9-4f9b-8768-1d1407a611e4" containerID="30791513c6f300766ccdb0f5fb7bb397a052c6f275ad885d32a04b5d266eab9d" exitCode=0 Jan 29 11:59:56 crc kubenswrapper[4766]: I0129 11:59:56.209664 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhg64" event={"ID":"a09d1f27-90f9-4f9b-8768-1d1407a611e4","Type":"ContainerDied","Data":"30791513c6f300766ccdb0f5fb7bb397a052c6f275ad885d32a04b5d266eab9d"} Jan 29 11:59:56 crc kubenswrapper[4766]: I0129 11:59:56.213316 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 11:59:59 crc kubenswrapper[4766]: I0129 11:59:59.224571 4766 scope.go:117] "RemoveContainer" containerID="89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191" Jan 29 11:59:59 crc kubenswrapper[4766]: E0129 11:59:59.225130 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 11:59:59 crc kubenswrapper[4766]: I0129 11:59:59.231148 4766 generic.go:334] "Generic (PLEG): container finished" podID="a09d1f27-90f9-4f9b-8768-1d1407a611e4" containerID="e7c421826453bf798985a26472a8ec87fc609cbec62a98ab5f5cbc42cf4e7ee3" exitCode=0 Jan 29 11:59:59 crc kubenswrapper[4766]: I0129 11:59:59.235838 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhg64" event={"ID":"a09d1f27-90f9-4f9b-8768-1d1407a611e4","Type":"ContainerDied","Data":"e7c421826453bf798985a26472a8ec87fc609cbec62a98ab5f5cbc42cf4e7ee3"} Jan 29 12:00:00 crc kubenswrapper[4766]: I0129 12:00:00.159148 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494800-fx79n"] Jan 29 12:00:00 crc kubenswrapper[4766]: I0129 12:00:00.161540 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-fx79n" Jan 29 12:00:00 crc kubenswrapper[4766]: I0129 12:00:00.164068 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 12:00:00 crc kubenswrapper[4766]: I0129 12:00:00.164524 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 12:00:00 crc kubenswrapper[4766]: I0129 12:00:00.176706 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494800-fx79n"] Jan 29 12:00:00 crc kubenswrapper[4766]: I0129 12:00:00.246287 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhg64" event={"ID":"a09d1f27-90f9-4f9b-8768-1d1407a611e4","Type":"ContainerStarted","Data":"55058456ed554901338b39f2b7108c3d22cd37f8c7d02849984fb1ba7cf47e1d"} Jan 29 12:00:00 crc kubenswrapper[4766]: I0129 12:00:00.269303 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lhg64" podStartSLOduration=2.531382775 podStartE2EDuration="6.269284728s" podCreationTimestamp="2026-01-29 11:59:54 +0000 UTC" firstStartedPulling="2026-01-29 11:59:56.2130499 +0000 UTC m=+2333.325442911" lastFinishedPulling="2026-01-29 11:59:59.950951843 +0000 UTC m=+2337.063344864" observedRunningTime="2026-01-29 12:00:00.262796132 +0000 UTC m=+2337.375189163" watchObservedRunningTime="2026-01-29 12:00:00.269284728 +0000 UTC m=+2337.381677739" Jan 29 12:00:00 crc kubenswrapper[4766]: I0129 12:00:00.318778 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ba336c4-3c4d-4d81-90c0-2c9eb0870345-secret-volume\") pod \"collect-profiles-29494800-fx79n\" (UID: \"6ba336c4-3c4d-4d81-90c0-2c9eb0870345\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-fx79n" Jan 29 12:00:00 crc kubenswrapper[4766]: I0129 12:00:00.318836 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ba336c4-3c4d-4d81-90c0-2c9eb0870345-config-volume\") pod \"collect-profiles-29494800-fx79n\" (UID: \"6ba336c4-3c4d-4d81-90c0-2c9eb0870345\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-fx79n" Jan 29 12:00:00 crc kubenswrapper[4766]: I0129 12:00:00.319151 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m75g\" (UniqueName: \"kubernetes.io/projected/6ba336c4-3c4d-4d81-90c0-2c9eb0870345-kube-api-access-6m75g\") pod \"collect-profiles-29494800-fx79n\" (UID: \"6ba336c4-3c4d-4d81-90c0-2c9eb0870345\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-fx79n" Jan 29 12:00:00 crc kubenswrapper[4766]: I0129 12:00:00.420360 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m75g\" (UniqueName: \"kubernetes.io/projected/6ba336c4-3c4d-4d81-90c0-2c9eb0870345-kube-api-access-6m75g\") pod \"collect-profiles-29494800-fx79n\" (UID: \"6ba336c4-3c4d-4d81-90c0-2c9eb0870345\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-fx79n" Jan 29 12:00:00 crc kubenswrapper[4766]: I0129 12:00:00.420539 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ba336c4-3c4d-4d81-90c0-2c9eb0870345-secret-volume\") pod \"collect-profiles-29494800-fx79n\" (UID: \"6ba336c4-3c4d-4d81-90c0-2c9eb0870345\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-fx79n" Jan 29 12:00:00 crc kubenswrapper[4766]: I0129 12:00:00.420567 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ba336c4-3c4d-4d81-90c0-2c9eb0870345-config-volume\") pod \"collect-profiles-29494800-fx79n\" (UID: \"6ba336c4-3c4d-4d81-90c0-2c9eb0870345\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-fx79n" Jan 29 12:00:00 crc kubenswrapper[4766]: I0129 12:00:00.421546 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ba336c4-3c4d-4d81-90c0-2c9eb0870345-config-volume\") pod \"collect-profiles-29494800-fx79n\" (UID: \"6ba336c4-3c4d-4d81-90c0-2c9eb0870345\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-fx79n" Jan 29 12:00:00 crc kubenswrapper[4766]: I0129 12:00:00.426358 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ba336c4-3c4d-4d81-90c0-2c9eb0870345-secret-volume\") pod \"collect-profiles-29494800-fx79n\" (UID: \"6ba336c4-3c4d-4d81-90c0-2c9eb0870345\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-fx79n" Jan 29 12:00:00 crc kubenswrapper[4766]: I0129 12:00:00.439109 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m75g\" (UniqueName: \"kubernetes.io/projected/6ba336c4-3c4d-4d81-90c0-2c9eb0870345-kube-api-access-6m75g\") pod \"collect-profiles-29494800-fx79n\" (UID: \"6ba336c4-3c4d-4d81-90c0-2c9eb0870345\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-fx79n" Jan 29 12:00:00 crc kubenswrapper[4766]: I0129 12:00:00.538567 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-fx79n" Jan 29 12:00:00 crc kubenswrapper[4766]: I0129 12:00:00.997712 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494800-fx79n"] Jan 29 12:00:01 crc kubenswrapper[4766]: I0129 12:00:01.254844 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-fx79n" event={"ID":"6ba336c4-3c4d-4d81-90c0-2c9eb0870345","Type":"ContainerStarted","Data":"b5a86b910823dd5dd7bf9601526620b7480da0dd8c621e1013fc6ad4fe17903e"} Jan 29 12:00:02 crc kubenswrapper[4766]: I0129 12:00:02.266038 4766 generic.go:334] "Generic (PLEG): container finished" podID="6ba336c4-3c4d-4d81-90c0-2c9eb0870345" containerID="3261d56ea0481f155efd1d4b6f19b721830e96ffcb51a4c93c0203e0213e4dfc" exitCode=0 Jan 29 12:00:02 crc kubenswrapper[4766]: I0129 12:00:02.266108 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-fx79n" event={"ID":"6ba336c4-3c4d-4d81-90c0-2c9eb0870345","Type":"ContainerDied","Data":"3261d56ea0481f155efd1d4b6f19b721830e96ffcb51a4c93c0203e0213e4dfc"} Jan 29 12:00:03 crc kubenswrapper[4766]: I0129 12:00:03.585879 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-fx79n" Jan 29 12:00:03 crc kubenswrapper[4766]: I0129 12:00:03.769657 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ba336c4-3c4d-4d81-90c0-2c9eb0870345-secret-volume\") pod \"6ba336c4-3c4d-4d81-90c0-2c9eb0870345\" (UID: \"6ba336c4-3c4d-4d81-90c0-2c9eb0870345\") " Jan 29 12:00:03 crc kubenswrapper[4766]: I0129 12:00:03.769861 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ba336c4-3c4d-4d81-90c0-2c9eb0870345-config-volume\") pod \"6ba336c4-3c4d-4d81-90c0-2c9eb0870345\" (UID: \"6ba336c4-3c4d-4d81-90c0-2c9eb0870345\") " Jan 29 12:00:03 crc kubenswrapper[4766]: I0129 12:00:03.769935 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m75g\" (UniqueName: \"kubernetes.io/projected/6ba336c4-3c4d-4d81-90c0-2c9eb0870345-kube-api-access-6m75g\") pod \"6ba336c4-3c4d-4d81-90c0-2c9eb0870345\" (UID: \"6ba336c4-3c4d-4d81-90c0-2c9eb0870345\") " Jan 29 12:00:03 crc kubenswrapper[4766]: I0129 12:00:03.770432 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ba336c4-3c4d-4d81-90c0-2c9eb0870345-config-volume" (OuterVolumeSpecName: "config-volume") pod "6ba336c4-3c4d-4d81-90c0-2c9eb0870345" (UID: "6ba336c4-3c4d-4d81-90c0-2c9eb0870345"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:00:03 crc kubenswrapper[4766]: I0129 12:00:03.777863 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ba336c4-3c4d-4d81-90c0-2c9eb0870345-kube-api-access-6m75g" (OuterVolumeSpecName: "kube-api-access-6m75g") pod "6ba336c4-3c4d-4d81-90c0-2c9eb0870345" (UID: "6ba336c4-3c4d-4d81-90c0-2c9eb0870345"). InnerVolumeSpecName "kube-api-access-6m75g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:00:03 crc kubenswrapper[4766]: I0129 12:00:03.777959 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ba336c4-3c4d-4d81-90c0-2c9eb0870345-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6ba336c4-3c4d-4d81-90c0-2c9eb0870345" (UID: "6ba336c4-3c4d-4d81-90c0-2c9eb0870345"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:00:03 crc kubenswrapper[4766]: I0129 12:00:03.871618 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ba336c4-3c4d-4d81-90c0-2c9eb0870345-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:00:03 crc kubenswrapper[4766]: I0129 12:00:03.871670 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m75g\" (UniqueName: \"kubernetes.io/projected/6ba336c4-3c4d-4d81-90c0-2c9eb0870345-kube-api-access-6m75g\") on node \"crc\" DevicePath \"\"" Jan 29 12:00:03 crc kubenswrapper[4766]: I0129 12:00:03.871687 4766 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ba336c4-3c4d-4d81-90c0-2c9eb0870345-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:00:04 crc kubenswrapper[4766]: I0129 12:00:04.280556 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-fx79n" event={"ID":"6ba336c4-3c4d-4d81-90c0-2c9eb0870345","Type":"ContainerDied","Data":"b5a86b910823dd5dd7bf9601526620b7480da0dd8c621e1013fc6ad4fe17903e"} Jan 29 12:00:04 crc kubenswrapper[4766]: I0129 12:00:04.280859 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5a86b910823dd5dd7bf9601526620b7480da0dd8c621e1013fc6ad4fe17903e" Jan 29 12:00:04 crc kubenswrapper[4766]: I0129 12:00:04.280625 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-fx79n" Jan 29 12:00:04 crc kubenswrapper[4766]: I0129 12:00:04.523647 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lhg64" Jan 29 12:00:04 crc kubenswrapper[4766]: I0129 12:00:04.523697 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lhg64" Jan 29 12:00:04 crc kubenswrapper[4766]: I0129 12:00:04.673274 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494755-ff4r9"] Jan 29 12:00:04 crc kubenswrapper[4766]: I0129 12:00:04.679706 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494755-ff4r9"] Jan 29 12:00:05 crc kubenswrapper[4766]: I0129 12:00:05.235357 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cfb993e-e305-4ad1-81f6-349bc2544e60" path="/var/lib/kubelet/pods/3cfb993e-e305-4ad1-81f6-349bc2544e60/volumes" Jan 29 12:00:05 crc kubenswrapper[4766]: I0129 12:00:05.572022 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lhg64" podUID="a09d1f27-90f9-4f9b-8768-1d1407a611e4" containerName="registry-server" probeResult="failure" output=< Jan 29 12:00:05 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s Jan 29 12:00:05 crc kubenswrapper[4766]: > Jan 29 12:00:12 crc kubenswrapper[4766]: I0129 12:00:12.224581 4766 scope.go:117] "RemoveContainer" containerID="89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191" Jan 29 12:00:12 crc kubenswrapper[4766]: E0129 12:00:12.225242 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:00:12 crc kubenswrapper[4766]: I0129 12:00:12.299943 4766 scope.go:117] "RemoveContainer" containerID="c94fa72e9e11ff303d0e43ca27cb9b3db4a372d5279771b7dce50783145d6354" Jan 29 12:00:14 crc kubenswrapper[4766]: I0129 12:00:14.565938 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lhg64" Jan 29 12:00:14 crc kubenswrapper[4766]: I0129 12:00:14.610729 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lhg64" Jan 29 12:00:14 crc kubenswrapper[4766]: I0129 12:00:14.806943 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lhg64"] Jan 29 12:00:16 crc kubenswrapper[4766]: I0129 12:00:16.379109 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lhg64" podUID="a09d1f27-90f9-4f9b-8768-1d1407a611e4" containerName="registry-server" containerID="cri-o://55058456ed554901338b39f2b7108c3d22cd37f8c7d02849984fb1ba7cf47e1d" gracePeriod=2 Jan 29 12:00:17 crc kubenswrapper[4766]: I0129 12:00:17.389155 4766 generic.go:334] "Generic (PLEG): container finished" podID="a09d1f27-90f9-4f9b-8768-1d1407a611e4" containerID="55058456ed554901338b39f2b7108c3d22cd37f8c7d02849984fb1ba7cf47e1d" exitCode=0 Jan 29 12:00:17 crc kubenswrapper[4766]: I0129 12:00:17.389293 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhg64" event={"ID":"a09d1f27-90f9-4f9b-8768-1d1407a611e4","Type":"ContainerDied","Data":"55058456ed554901338b39f2b7108c3d22cd37f8c7d02849984fb1ba7cf47e1d"} Jan 29 12:00:17 crc kubenswrapper[4766]: I0129 12:00:17.919149 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lhg64" Jan 29 12:00:18 crc kubenswrapper[4766]: I0129 12:00:18.070985 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a09d1f27-90f9-4f9b-8768-1d1407a611e4-catalog-content\") pod \"a09d1f27-90f9-4f9b-8768-1d1407a611e4\" (UID: \"a09d1f27-90f9-4f9b-8768-1d1407a611e4\") " Jan 29 12:00:18 crc kubenswrapper[4766]: I0129 12:00:18.071104 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqbpp\" (UniqueName: \"kubernetes.io/projected/a09d1f27-90f9-4f9b-8768-1d1407a611e4-kube-api-access-sqbpp\") pod \"a09d1f27-90f9-4f9b-8768-1d1407a611e4\" (UID: \"a09d1f27-90f9-4f9b-8768-1d1407a611e4\") " Jan 29 12:00:18 crc kubenswrapper[4766]: I0129 12:00:18.071147 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a09d1f27-90f9-4f9b-8768-1d1407a611e4-utilities\") pod \"a09d1f27-90f9-4f9b-8768-1d1407a611e4\" (UID: \"a09d1f27-90f9-4f9b-8768-1d1407a611e4\") " Jan 29 12:00:18 crc kubenswrapper[4766]: I0129 12:00:18.072236 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a09d1f27-90f9-4f9b-8768-1d1407a611e4-utilities" (OuterVolumeSpecName: "utilities") pod "a09d1f27-90f9-4f9b-8768-1d1407a611e4" (UID: "a09d1f27-90f9-4f9b-8768-1d1407a611e4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:00:18 crc kubenswrapper[4766]: I0129 12:00:18.078941 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a09d1f27-90f9-4f9b-8768-1d1407a611e4-kube-api-access-sqbpp" (OuterVolumeSpecName: "kube-api-access-sqbpp") pod "a09d1f27-90f9-4f9b-8768-1d1407a611e4" (UID: "a09d1f27-90f9-4f9b-8768-1d1407a611e4"). InnerVolumeSpecName "kube-api-access-sqbpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:00:18 crc kubenswrapper[4766]: I0129 12:00:18.172799 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqbpp\" (UniqueName: \"kubernetes.io/projected/a09d1f27-90f9-4f9b-8768-1d1407a611e4-kube-api-access-sqbpp\") on node \"crc\" DevicePath \"\"" Jan 29 12:00:18 crc kubenswrapper[4766]: I0129 12:00:18.172839 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a09d1f27-90f9-4f9b-8768-1d1407a611e4-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:00:18 crc kubenswrapper[4766]: I0129 12:00:18.215366 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a09d1f27-90f9-4f9b-8768-1d1407a611e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a09d1f27-90f9-4f9b-8768-1d1407a611e4" (UID: "a09d1f27-90f9-4f9b-8768-1d1407a611e4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:00:18 crc kubenswrapper[4766]: I0129 12:00:18.274175 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a09d1f27-90f9-4f9b-8768-1d1407a611e4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:00:18 crc kubenswrapper[4766]: I0129 12:00:18.398817 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhg64" event={"ID":"a09d1f27-90f9-4f9b-8768-1d1407a611e4","Type":"ContainerDied","Data":"93033ec9b62e30139d81df78be7a52eb4d68094b14340df5b8051843b5dd182c"} Jan 29 12:00:18 crc kubenswrapper[4766]: I0129 12:00:18.398867 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lhg64" Jan 29 12:00:18 crc kubenswrapper[4766]: I0129 12:00:18.398879 4766 scope.go:117] "RemoveContainer" containerID="55058456ed554901338b39f2b7108c3d22cd37f8c7d02849984fb1ba7cf47e1d" Jan 29 12:00:18 crc kubenswrapper[4766]: I0129 12:00:18.417231 4766 scope.go:117] "RemoveContainer" containerID="e7c421826453bf798985a26472a8ec87fc609cbec62a98ab5f5cbc42cf4e7ee3" Jan 29 12:00:18 crc kubenswrapper[4766]: I0129 12:00:18.432665 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lhg64"] Jan 29 12:00:18 crc kubenswrapper[4766]: I0129 12:00:18.438532 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lhg64"] Jan 29 12:00:18 crc kubenswrapper[4766]: I0129 12:00:18.463180 4766 scope.go:117] "RemoveContainer" containerID="30791513c6f300766ccdb0f5fb7bb397a052c6f275ad885d32a04b5d266eab9d" Jan 29 12:00:19 crc kubenswrapper[4766]: I0129 12:00:19.234494 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a09d1f27-90f9-4f9b-8768-1d1407a611e4" path="/var/lib/kubelet/pods/a09d1f27-90f9-4f9b-8768-1d1407a611e4/volumes" Jan 29 12:00:24 crc kubenswrapper[4766]: I0129 12:00:24.224923 4766 scope.go:117] "RemoveContainer" containerID="89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191" Jan 29 12:00:24 crc kubenswrapper[4766]: E0129 12:00:24.225460 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:00:36 crc kubenswrapper[4766]: I0129 12:00:36.224647 4766 scope.go:117] "RemoveContainer" containerID="89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191" Jan 29 12:00:36 crc kubenswrapper[4766]: E0129 12:00:36.225436 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:00:48 crc kubenswrapper[4766]: I0129 12:00:48.224080 4766 scope.go:117] "RemoveContainer" containerID="89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191" Jan 29 12:00:48 crc kubenswrapper[4766]: E0129 12:00:48.224868 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:00:49 crc kubenswrapper[4766]: I0129 12:00:49.029725 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2p9s8"] Jan 29 12:00:49 crc kubenswrapper[4766]: E0129 12:00:49.030406 4766 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a09d1f27-90f9-4f9b-8768-1d1407a611e4" containerName="extract-utilities" Jan 29 12:00:49 crc kubenswrapper[4766]: I0129 12:00:49.030442 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a09d1f27-90f9-4f9b-8768-1d1407a611e4" containerName="extract-utilities" Jan 29 12:00:49 crc kubenswrapper[4766]: E0129 12:00:49.030456 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ba336c4-3c4d-4d81-90c0-2c9eb0870345" containerName="collect-profiles" Jan 29 12:00:49 crc kubenswrapper[4766]: I0129 12:00:49.030465 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ba336c4-3c4d-4d81-90c0-2c9eb0870345" containerName="collect-profiles" Jan 29 12:00:49 crc kubenswrapper[4766]: E0129 12:00:49.030479 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a09d1f27-90f9-4f9b-8768-1d1407a611e4" containerName="registry-server" Jan 29 12:00:49 crc kubenswrapper[4766]: I0129 12:00:49.030487 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a09d1f27-90f9-4f9b-8768-1d1407a611e4" containerName="registry-server" Jan 29 12:00:49 crc kubenswrapper[4766]: E0129 12:00:49.030504 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a09d1f27-90f9-4f9b-8768-1d1407a611e4" containerName="extract-content" Jan 29 12:00:49 crc kubenswrapper[4766]: I0129 12:00:49.030511 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a09d1f27-90f9-4f9b-8768-1d1407a611e4" containerName="extract-content" Jan 29 12:00:49 crc kubenswrapper[4766]: I0129 12:00:49.030713 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="a09d1f27-90f9-4f9b-8768-1d1407a611e4" containerName="registry-server" Jan 29 12:00:49 crc kubenswrapper[4766]: I0129 12:00:49.030741 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ba336c4-3c4d-4d81-90c0-2c9eb0870345" containerName="collect-profiles" Jan 29 12:00:49 crc kubenswrapper[4766]: I0129 12:00:49.031941 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2p9s8" Jan 29 12:00:49 crc kubenswrapper[4766]: I0129 12:00:49.039010 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2p9s8"] Jan 29 12:00:49 crc kubenswrapper[4766]: I0129 12:00:49.201824 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab6267fd-b4e3-46b8-a6e8-d607b46275a4-utilities\") pod \"community-operators-2p9s8\" (UID: \"ab6267fd-b4e3-46b8-a6e8-d607b46275a4\") " pod="openshift-marketplace/community-operators-2p9s8" Jan 29 12:00:49 crc kubenswrapper[4766]: I0129 12:00:49.201938 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f66pp\" (UniqueName: \"kubernetes.io/projected/ab6267fd-b4e3-46b8-a6e8-d607b46275a4-kube-api-access-f66pp\") pod \"community-operators-2p9s8\" (UID: \"ab6267fd-b4e3-46b8-a6e8-d607b46275a4\") " pod="openshift-marketplace/community-operators-2p9s8" Jan 29 12:00:49 crc kubenswrapper[4766]: I0129 12:00:49.202030 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab6267fd-b4e3-46b8-a6e8-d607b46275a4-catalog-content\") pod \"community-operators-2p9s8\" (UID: \"ab6267fd-b4e3-46b8-a6e8-d607b46275a4\") " pod="openshift-marketplace/community-operators-2p9s8" Jan 29 12:00:49 crc kubenswrapper[4766]: I0129 12:00:49.303621 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab6267fd-b4e3-46b8-a6e8-d607b46275a4-catalog-content\") pod \"community-operators-2p9s8\" (UID: \"ab6267fd-b4e3-46b8-a6e8-d607b46275a4\") " pod="openshift-marketplace/community-operators-2p9s8" Jan 29 12:00:49 crc kubenswrapper[4766]: I0129 12:00:49.303683 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab6267fd-b4e3-46b8-a6e8-d607b46275a4-utilities\") pod \"community-operators-2p9s8\" (UID: \"ab6267fd-b4e3-46b8-a6e8-d607b46275a4\") " pod="openshift-marketplace/community-operators-2p9s8" Jan 29 12:00:49 crc kubenswrapper[4766]: I0129 12:00:49.303747 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f66pp\" (UniqueName: \"kubernetes.io/projected/ab6267fd-b4e3-46b8-a6e8-d607b46275a4-kube-api-access-f66pp\") pod \"community-operators-2p9s8\" (UID: \"ab6267fd-b4e3-46b8-a6e8-d607b46275a4\") " pod="openshift-marketplace/community-operators-2p9s8" Jan 29 12:00:49 crc kubenswrapper[4766]: I0129 12:00:49.304336 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab6267fd-b4e3-46b8-a6e8-d607b46275a4-utilities\") pod \"community-operators-2p9s8\" (UID: \"ab6267fd-b4e3-46b8-a6e8-d607b46275a4\") " pod="openshift-marketplace/community-operators-2p9s8" Jan 29 12:00:49 crc kubenswrapper[4766]: I0129 12:00:49.304627 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab6267fd-b4e3-46b8-a6e8-d607b46275a4-catalog-content\") pod \"community-operators-2p9s8\" (UID: \"ab6267fd-b4e3-46b8-a6e8-d607b46275a4\") " pod="openshift-marketplace/community-operators-2p9s8" Jan 29 12:00:49 crc kubenswrapper[4766]: I0129 12:00:49.346841 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-f66pp\" (UniqueName: \"kubernetes.io/projected/ab6267fd-b4e3-46b8-a6e8-d607b46275a4-kube-api-access-f66pp\") pod \"community-operators-2p9s8\" (UID: \"ab6267fd-b4e3-46b8-a6e8-d607b46275a4\") " pod="openshift-marketplace/community-operators-2p9s8" Jan 29 12:00:49 crc kubenswrapper[4766]: I0129 12:00:49.350941 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2p9s8" Jan 29 12:00:49 crc kubenswrapper[4766]: I0129 12:00:49.872002 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2p9s8"] Jan 29 12:00:50 crc kubenswrapper[4766]: I0129 12:00:50.616193 4766 generic.go:334] "Generic (PLEG): container finished" podID="ab6267fd-b4e3-46b8-a6e8-d607b46275a4" containerID="3324b4b0d058db354aff10877af8b4dbac5220174ba52d9d6f768a79c65145e0" exitCode=0 Jan 29 12:00:50 crc kubenswrapper[4766]: I0129 12:00:50.616403 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2p9s8" event={"ID":"ab6267fd-b4e3-46b8-a6e8-d607b46275a4","Type":"ContainerDied","Data":"3324b4b0d058db354aff10877af8b4dbac5220174ba52d9d6f768a79c65145e0"} Jan 29 12:00:50 crc kubenswrapper[4766]: I0129 12:00:50.616522 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2p9s8" event={"ID":"ab6267fd-b4e3-46b8-a6e8-d607b46275a4","Type":"ContainerStarted","Data":"74c02a6abef4a49341f1f8b6d915a89876e0e9c287d6e47d55b00efe7b5fd339"} Jan 29 12:00:54 crc kubenswrapper[4766]: I0129 12:00:54.651353 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2p9s8" event={"ID":"ab6267fd-b4e3-46b8-a6e8-d607b46275a4","Type":"ContainerStarted","Data":"3c7093256a9b568580cc428250ca3440c3d0360d702f7f62c0fe4aa89c067b86"} Jan 29 12:00:55 crc kubenswrapper[4766]: I0129 12:00:55.658310 4766 generic.go:334] "Generic (PLEG): container finished" podID="ab6267fd-b4e3-46b8-a6e8-d607b46275a4" containerID="3c7093256a9b568580cc428250ca3440c3d0360d702f7f62c0fe4aa89c067b86" exitCode=0 Jan 29 12:00:55 crc kubenswrapper[4766]: I0129 12:00:55.658372 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2p9s8" event={"ID":"ab6267fd-b4e3-46b8-a6e8-d607b46275a4","Type":"ContainerDied","Data":"3c7093256a9b568580cc428250ca3440c3d0360d702f7f62c0fe4aa89c067b86"} Jan 29 12:00:56 crc kubenswrapper[4766]: I0129 12:00:56.669995 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2p9s8" event={"ID":"ab6267fd-b4e3-46b8-a6e8-d607b46275a4","Type":"ContainerStarted","Data":"06d117e5b39382498defb319cf2702dc060f73a57b362377e230f6b2dffde89f"} Jan 29 12:00:56 crc kubenswrapper[4766]: I0129 12:00:56.704324 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2p9s8" podStartSLOduration=1.994867436 podStartE2EDuration="7.704292615s" podCreationTimestamp="2026-01-29 12:00:49 +0000 UTC" firstStartedPulling="2026-01-29 12:00:50.618295932 +0000 UTC m=+2387.730688943" lastFinishedPulling="2026-01-29 12:00:56.327721111 +0000 UTC m=+2393.440114122" observedRunningTime="2026-01-29 12:00:56.699300511 +0000 UTC m=+2393.811693522" watchObservedRunningTime="2026-01-29 12:00:56.704292615 +0000 UTC m=+2393.816685626" Jan 29 12:00:59 crc kubenswrapper[4766]: I0129 12:00:59.224979 4766 scope.go:117] "RemoveContainer" 
containerID="89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191" Jan 29 12:00:59 crc kubenswrapper[4766]: E0129 12:00:59.225380 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:00:59 crc kubenswrapper[4766]: I0129 12:00:59.351523 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2p9s8" Jan 29 12:00:59 crc kubenswrapper[4766]: I0129 12:00:59.351885 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2p9s8" Jan 29 12:00:59 crc kubenswrapper[4766]: I0129 12:00:59.396894 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2p9s8" Jan 29 12:01:09 crc kubenswrapper[4766]: I0129 12:01:09.391671 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2p9s8" Jan 29 12:01:09 crc kubenswrapper[4766]: I0129 12:01:09.437225 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2p9s8"] Jan 29 12:01:09 crc kubenswrapper[4766]: I0129 12:01:09.766185 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2p9s8" podUID="ab6267fd-b4e3-46b8-a6e8-d607b46275a4" containerName="registry-server" containerID="cri-o://06d117e5b39382498defb319cf2702dc060f73a57b362377e230f6b2dffde89f" gracePeriod=2 Jan 29 12:01:10 crc kubenswrapper[4766]: I0129 12:01:10.714683 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2p9s8" Jan 29 12:01:10 crc kubenswrapper[4766]: I0129 12:01:10.774705 4766 generic.go:334] "Generic (PLEG): container finished" podID="ab6267fd-b4e3-46b8-a6e8-d607b46275a4" containerID="06d117e5b39382498defb319cf2702dc060f73a57b362377e230f6b2dffde89f" exitCode=0 Jan 29 12:01:10 crc kubenswrapper[4766]: I0129 12:01:10.774758 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2p9s8" Jan 29 12:01:10 crc kubenswrapper[4766]: I0129 12:01:10.774752 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2p9s8" event={"ID":"ab6267fd-b4e3-46b8-a6e8-d607b46275a4","Type":"ContainerDied","Data":"06d117e5b39382498defb319cf2702dc060f73a57b362377e230f6b2dffde89f"} Jan 29 12:01:10 crc kubenswrapper[4766]: I0129 12:01:10.774825 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2p9s8" event={"ID":"ab6267fd-b4e3-46b8-a6e8-d607b46275a4","Type":"ContainerDied","Data":"74c02a6abef4a49341f1f8b6d915a89876e0e9c287d6e47d55b00efe7b5fd339"} Jan 29 12:01:10 crc kubenswrapper[4766]: I0129 12:01:10.774854 4766 scope.go:117] "RemoveContainer" containerID="06d117e5b39382498defb319cf2702dc060f73a57b362377e230f6b2dffde89f" Jan 29 12:01:10 crc kubenswrapper[4766]: I0129 12:01:10.798361 4766 scope.go:117] "RemoveContainer" containerID="3c7093256a9b568580cc428250ca3440c3d0360d702f7f62c0fe4aa89c067b86" Jan 29 12:01:10 crc kubenswrapper[4766]: I0129 12:01:10.808385 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab6267fd-b4e3-46b8-a6e8-d607b46275a4-catalog-content\") pod \"ab6267fd-b4e3-46b8-a6e8-d607b46275a4\" (UID: \"ab6267fd-b4e3-46b8-a6e8-d607b46275a4\") " Jan 29 12:01:10 crc kubenswrapper[4766]: I0129 12:01:10.808484 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f66pp\" (UniqueName: \"kubernetes.io/projected/ab6267fd-b4e3-46b8-a6e8-d607b46275a4-kube-api-access-f66pp\") pod \"ab6267fd-b4e3-46b8-a6e8-d607b46275a4\" (UID: \"ab6267fd-b4e3-46b8-a6e8-d607b46275a4\") " Jan 29 12:01:10 crc kubenswrapper[4766]: I0129 12:01:10.808567 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab6267fd-b4e3-46b8-a6e8-d607b46275a4-utilities\") pod \"ab6267fd-b4e3-46b8-a6e8-d607b46275a4\" (UID: \"ab6267fd-b4e3-46b8-a6e8-d607b46275a4\") " Jan 29 12:01:10 crc kubenswrapper[4766]: I0129 12:01:10.809690 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab6267fd-b4e3-46b8-a6e8-d607b46275a4-utilities" (OuterVolumeSpecName: "utilities") pod "ab6267fd-b4e3-46b8-a6e8-d607b46275a4" (UID: "ab6267fd-b4e3-46b8-a6e8-d607b46275a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:01:10 crc kubenswrapper[4766]: I0129 12:01:10.814897 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab6267fd-b4e3-46b8-a6e8-d607b46275a4-kube-api-access-f66pp" (OuterVolumeSpecName: "kube-api-access-f66pp") pod "ab6267fd-b4e3-46b8-a6e8-d607b46275a4" (UID: "ab6267fd-b4e3-46b8-a6e8-d607b46275a4"). InnerVolumeSpecName "kube-api-access-f66pp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:01:10 crc kubenswrapper[4766]: I0129 12:01:10.818479 4766 scope.go:117] "RemoveContainer" containerID="3324b4b0d058db354aff10877af8b4dbac5220174ba52d9d6f768a79c65145e0" Jan 29 12:01:10 crc kubenswrapper[4766]: I0129 12:01:10.865291 4766 scope.go:117] "RemoveContainer" containerID="06d117e5b39382498defb319cf2702dc060f73a57b362377e230f6b2dffde89f" Jan 29 12:01:10 crc kubenswrapper[4766]: E0129 12:01:10.865826 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06d117e5b39382498defb319cf2702dc060f73a57b362377e230f6b2dffde89f\": container with ID starting with 06d117e5b39382498defb319cf2702dc060f73a57b362377e230f6b2dffde89f not found: ID does not exist" containerID="06d117e5b39382498defb319cf2702dc060f73a57b362377e230f6b2dffde89f" Jan 29 12:01:10 crc kubenswrapper[4766]: I0129 12:01:10.865868 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06d117e5b39382498defb319cf2702dc060f73a57b362377e230f6b2dffde89f"} err="failed to get container status \"06d117e5b39382498defb319cf2702dc060f73a57b362377e230f6b2dffde89f\": rpc error: code = NotFound desc = could not find container \"06d117e5b39382498defb319cf2702dc060f73a57b362377e230f6b2dffde89f\": container with ID starting with 06d117e5b39382498defb319cf2702dc060f73a57b362377e230f6b2dffde89f not found: ID does not exist" Jan 29 12:01:10 crc kubenswrapper[4766]: I0129 12:01:10.865896 4766 scope.go:117] "RemoveContainer" containerID="3c7093256a9b568580cc428250ca3440c3d0360d702f7f62c0fe4aa89c067b86" Jan 29 12:01:10 crc kubenswrapper[4766]: E0129 12:01:10.866265 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c7093256a9b568580cc428250ca3440c3d0360d702f7f62c0fe4aa89c067b86\": container with ID starting with 3c7093256a9b568580cc428250ca3440c3d0360d702f7f62c0fe4aa89c067b86 not found: ID does not exist" containerID="3c7093256a9b568580cc428250ca3440c3d0360d702f7f62c0fe4aa89c067b86" Jan 29 12:01:10 crc kubenswrapper[4766]: I0129 12:01:10.866305 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c7093256a9b568580cc428250ca3440c3d0360d702f7f62c0fe4aa89c067b86"} err="failed to get container status \"3c7093256a9b568580cc428250ca3440c3d0360d702f7f62c0fe4aa89c067b86\": rpc error: code = NotFound desc = could not find container \"3c7093256a9b568580cc428250ca3440c3d0360d702f7f62c0fe4aa89c067b86\": container with ID starting with 3c7093256a9b568580cc428250ca3440c3d0360d702f7f62c0fe4aa89c067b86 not found: ID does not exist" Jan 29 12:01:10 crc kubenswrapper[4766]: I0129 12:01:10.866326 4766 scope.go:117] "RemoveContainer" containerID="3324b4b0d058db354aff10877af8b4dbac5220174ba52d9d6f768a79c65145e0" Jan 29 12:01:10 crc kubenswrapper[4766]: E0129 12:01:10.866602 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3324b4b0d058db354aff10877af8b4dbac5220174ba52d9d6f768a79c65145e0\": container with ID starting with 3324b4b0d058db354aff10877af8b4dbac5220174ba52d9d6f768a79c65145e0 not found: ID does not exist" containerID="3324b4b0d058db354aff10877af8b4dbac5220174ba52d9d6f768a79c65145e0" Jan 29 12:01:10 crc kubenswrapper[4766]: I0129 12:01:10.866643 4766 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3324b4b0d058db354aff10877af8b4dbac5220174ba52d9d6f768a79c65145e0"} err="failed to get container status \"3324b4b0d058db354aff10877af8b4dbac5220174ba52d9d6f768a79c65145e0\": rpc error: code = NotFound desc = could not find container \"3324b4b0d058db354aff10877af8b4dbac5220174ba52d9d6f768a79c65145e0\": container with ID starting with 3324b4b0d058db354aff10877af8b4dbac5220174ba52d9d6f768a79c65145e0 not found: ID does not exist" Jan 29 12:01:10 crc kubenswrapper[4766]: I0129 12:01:10.870768 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab6267fd-b4e3-46b8-a6e8-d607b46275a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab6267fd-b4e3-46b8-a6e8-d607b46275a4" (UID: "ab6267fd-b4e3-46b8-a6e8-d607b46275a4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:01:10 crc kubenswrapper[4766]: I0129 12:01:10.909893 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab6267fd-b4e3-46b8-a6e8-d607b46275a4-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:01:10 crc kubenswrapper[4766]: I0129 12:01:10.909940 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab6267fd-b4e3-46b8-a6e8-d607b46275a4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:01:10 crc kubenswrapper[4766]: I0129 12:01:10.909954 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f66pp\" (UniqueName: \"kubernetes.io/projected/ab6267fd-b4e3-46b8-a6e8-d607b46275a4-kube-api-access-f66pp\") on node \"crc\" DevicePath \"\"" Jan 29 12:01:11 crc kubenswrapper[4766]: I0129 12:01:11.110638 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2p9s8"] Jan 29 12:01:11 crc kubenswrapper[4766]: I0129 12:01:11.117635 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2p9s8"] Jan 29 12:01:11 crc kubenswrapper[4766]: I0129 12:01:11.233154 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab6267fd-b4e3-46b8-a6e8-d607b46275a4" path="/var/lib/kubelet/pods/ab6267fd-b4e3-46b8-a6e8-d607b46275a4/volumes" Jan 29 12:01:13 crc kubenswrapper[4766]: I0129 12:01:13.224730 4766 scope.go:117] "RemoveContainer" containerID="89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191" Jan 29 12:01:13 crc kubenswrapper[4766]: E0129 12:01:13.225155 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:01:25 crc kubenswrapper[4766]: I0129 12:01:25.230754 4766 scope.go:117] "RemoveContainer" containerID="89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191" Jan 29 12:01:25 crc kubenswrapper[4766]: E0129 12:01:25.231681 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:01:40 crc kubenswrapper[4766]: I0129 12:01:40.224281 4766 scope.go:117] "RemoveContainer" containerID="89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191" Jan 29 12:01:40 crc kubenswrapper[4766]: E0129 12:01:40.225487 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:01:53 crc kubenswrapper[4766]: I0129 12:01:53.226119 4766 scope.go:117] "RemoveContainer" containerID="89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191" Jan 29 12:01:53 crc kubenswrapper[4766]: E0129 12:01:53.226929 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:02:05 crc kubenswrapper[4766]: I0129 12:02:05.228195 4766 scope.go:117] "RemoveContainer" containerID="89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191" Jan 29 12:02:05 crc kubenswrapper[4766]: E0129 12:02:05.229920 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:02:17 crc kubenswrapper[4766]: I0129 12:02:17.225570 4766 scope.go:117] "RemoveContainer" containerID="89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191" Jan 29 12:02:17 crc kubenswrapper[4766]: E0129 12:02:17.226963 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:02:29 crc kubenswrapper[4766]: I0129 12:02:29.224516 4766 scope.go:117] "RemoveContainer" containerID="89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191" Jan 29 12:02:29 crc kubenswrapper[4766]: E0129 12:02:29.225240 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:02:43 crc kubenswrapper[4766]: I0129 12:02:43.225287 4766 
scope.go:117] "RemoveContainer" containerID="89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191" Jan 29 12:02:43 crc kubenswrapper[4766]: E0129 12:02:43.226204 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:02:54 crc kubenswrapper[4766]: I0129 12:02:54.225533 4766 scope.go:117] "RemoveContainer" containerID="89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191" Jan 29 12:02:54 crc kubenswrapper[4766]: E0129 12:02:54.226475 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:03:05 crc kubenswrapper[4766]: I0129 12:03:05.236396 4766 scope.go:117] "RemoveContainer" containerID="89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191" Jan 29 12:03:05 crc kubenswrapper[4766]: E0129 12:03:05.237628 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:03:20 crc kubenswrapper[4766]: I0129 12:03:20.224830 4766 scope.go:117] "RemoveContainer" containerID="89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191" Jan 29 12:03:20 crc kubenswrapper[4766]: E0129 12:03:20.225576 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:03:34 crc kubenswrapper[4766]: I0129 12:03:34.225066 4766 scope.go:117] "RemoveContainer" containerID="89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191" Jan 29 12:03:34 crc kubenswrapper[4766]: E0129 12:03:34.226196 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:03:47 crc kubenswrapper[4766]: I0129 12:03:47.224813 4766 scope.go:117] "RemoveContainer" containerID="89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191" Jan 29 12:03:47 crc kubenswrapper[4766]: E0129 12:03:47.226335 4766 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:03:59 crc kubenswrapper[4766]: I0129 12:03:59.224690 4766 scope.go:117] "RemoveContainer" containerID="89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191" Jan 29 12:03:59 crc kubenswrapper[4766]: E0129 12:03:59.225436 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:04:12 crc kubenswrapper[4766]: I0129 12:04:12.224439 4766 scope.go:117] "RemoveContainer" containerID="89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191" Jan 29 12:04:12 crc kubenswrapper[4766]: E0129 12:04:12.225200 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:04:24 crc kubenswrapper[4766]: I0129 12:04:24.224539 4766 scope.go:117] "RemoveContainer" containerID="89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191" Jan 29 12:04:24 crc kubenswrapper[4766]: E0129 12:04:24.225404 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:04:35 crc kubenswrapper[4766]: I0129 12:04:35.229218 4766 scope.go:117] "RemoveContainer" containerID="89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191" Jan 29 12:04:35 crc kubenswrapper[4766]: E0129 12:04:35.230061 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:04:48 crc kubenswrapper[4766]: I0129 12:04:48.224340 4766 scope.go:117] "RemoveContainer" containerID="89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191" Jan 29 12:04:48 crc kubenswrapper[4766]: I0129 12:04:48.769837 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" 
event={"ID":"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a","Type":"ContainerStarted","Data":"6384f324226c58cf49617538784c9b07815b46089ee6a92aa42286684fbc6cae"} Jan 29 12:07:10 crc kubenswrapper[4766]: I0129 12:07:10.381942 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bddxn"] Jan 29 12:07:10 crc kubenswrapper[4766]: E0129 12:07:10.383493 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab6267fd-b4e3-46b8-a6e8-d607b46275a4" containerName="registry-server" Jan 29 12:07:10 crc kubenswrapper[4766]: I0129 12:07:10.383512 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab6267fd-b4e3-46b8-a6e8-d607b46275a4" containerName="registry-server" Jan 29 12:07:10 crc kubenswrapper[4766]: E0129 12:07:10.383535 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab6267fd-b4e3-46b8-a6e8-d607b46275a4" containerName="extract-content" Jan 29 12:07:10 crc kubenswrapper[4766]: I0129 12:07:10.383545 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab6267fd-b4e3-46b8-a6e8-d607b46275a4" containerName="extract-content" Jan 29 12:07:10 crc kubenswrapper[4766]: E0129 12:07:10.383577 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab6267fd-b4e3-46b8-a6e8-d607b46275a4" containerName="extract-utilities" Jan 29 12:07:10 crc kubenswrapper[4766]: I0129 12:07:10.383586 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab6267fd-b4e3-46b8-a6e8-d607b46275a4" containerName="extract-utilities" Jan 29 12:07:10 crc kubenswrapper[4766]: I0129 12:07:10.383756 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab6267fd-b4e3-46b8-a6e8-d607b46275a4" containerName="registry-server" Jan 29 12:07:10 crc kubenswrapper[4766]: I0129 12:07:10.385170 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bddxn" Jan 29 12:07:10 crc kubenswrapper[4766]: I0129 12:07:10.392239 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bddxn"] Jan 29 12:07:10 crc kubenswrapper[4766]: I0129 12:07:10.542195 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b66950-6e3a-470a-b729-0591841a404b-catalog-content\") pod \"certified-operators-bddxn\" (UID: \"65b66950-6e3a-470a-b729-0591841a404b\") " pod="openshift-marketplace/certified-operators-bddxn" Jan 29 12:07:10 crc kubenswrapper[4766]: I0129 12:07:10.542398 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd2qf\" (UniqueName: \"kubernetes.io/projected/65b66950-6e3a-470a-b729-0591841a404b-kube-api-access-fd2qf\") pod \"certified-operators-bddxn\" (UID: \"65b66950-6e3a-470a-b729-0591841a404b\") " pod="openshift-marketplace/certified-operators-bddxn" Jan 29 12:07:10 crc kubenswrapper[4766]: I0129 12:07:10.542642 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b66950-6e3a-470a-b729-0591841a404b-utilities\") pod \"certified-operators-bddxn\" (UID: \"65b66950-6e3a-470a-b729-0591841a404b\") " pod="openshift-marketplace/certified-operators-bddxn" Jan 29 12:07:10 crc kubenswrapper[4766]: I0129 12:07:10.644319 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b66950-6e3a-470a-b729-0591841a404b-catalog-content\") pod \"certified-operators-bddxn\" (UID: \"65b66950-6e3a-470a-b729-0591841a404b\") " pod="openshift-marketplace/certified-operators-bddxn" Jan 29 12:07:10 crc kubenswrapper[4766]: I0129 12:07:10.644444 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd2qf\" (UniqueName: \"kubernetes.io/projected/65b66950-6e3a-470a-b729-0591841a404b-kube-api-access-fd2qf\") pod \"certified-operators-bddxn\" (UID: \"65b66950-6e3a-470a-b729-0591841a404b\") " pod="openshift-marketplace/certified-operators-bddxn" Jan 29 12:07:10 crc kubenswrapper[4766]: I0129 12:07:10.644516 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b66950-6e3a-470a-b729-0591841a404b-utilities\") pod \"certified-operators-bddxn\" (UID: \"65b66950-6e3a-470a-b729-0591841a404b\") " pod="openshift-marketplace/certified-operators-bddxn" Jan 29 12:07:10 crc kubenswrapper[4766]: I0129 12:07:10.645253 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b66950-6e3a-470a-b729-0591841a404b-catalog-content\") pod \"certified-operators-bddxn\" (UID: \"65b66950-6e3a-470a-b729-0591841a404b\") " pod="openshift-marketplace/certified-operators-bddxn" Jan 29 12:07:10 crc kubenswrapper[4766]: I0129 12:07:10.645455 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b66950-6e3a-470a-b729-0591841a404b-utilities\") pod \"certified-operators-bddxn\" (UID: \"65b66950-6e3a-470a-b729-0591841a404b\") " pod="openshift-marketplace/certified-operators-bddxn" Jan 29 12:07:10 crc kubenswrapper[4766]: I0129 12:07:10.670109 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fd2qf\" (UniqueName: \"kubernetes.io/projected/65b66950-6e3a-470a-b729-0591841a404b-kube-api-access-fd2qf\") pod \"certified-operators-bddxn\" (UID: \"65b66950-6e3a-470a-b729-0591841a404b\") " pod="openshift-marketplace/certified-operators-bddxn" Jan 29 12:07:10 crc kubenswrapper[4766]: I0129 12:07:10.712563 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bddxn" Jan 29 12:07:11 crc kubenswrapper[4766]: I0129 12:07:11.220459 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bddxn"] Jan 29 12:07:11 crc kubenswrapper[4766]: W0129 12:07:11.234617 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65b66950_6e3a_470a_b729_0591841a404b.slice/crio-9bd06fd5a6fc9eeb48162b1e5a29be148a8251aaec331b73618f25a12e620cb8 WatchSource:0}: Error finding container 9bd06fd5a6fc9eeb48162b1e5a29be148a8251aaec331b73618f25a12e620cb8: Status 404 returned error can't find the container with id 9bd06fd5a6fc9eeb48162b1e5a29be148a8251aaec331b73618f25a12e620cb8 Jan 29 12:07:11 crc kubenswrapper[4766]: I0129 12:07:11.251318 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bddxn" event={"ID":"65b66950-6e3a-470a-b729-0591841a404b","Type":"ContainerStarted","Data":"9bd06fd5a6fc9eeb48162b1e5a29be148a8251aaec331b73618f25a12e620cb8"} Jan 29 12:07:11 crc kubenswrapper[4766]: I0129 12:07:11.357492 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cwv9f"] Jan 29 12:07:11 crc kubenswrapper[4766]: I0129 12:07:11.359326 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwv9f" Jan 29 12:07:11 crc kubenswrapper[4766]: I0129 12:07:11.374943 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwv9f"] Jan 29 12:07:11 crc kubenswrapper[4766]: I0129 12:07:11.460002 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fvpv\" (UniqueName: \"kubernetes.io/projected/ff5cc63b-b4ad-42e5-b920-8fcec7abfa20-kube-api-access-7fvpv\") pod \"redhat-marketplace-cwv9f\" (UID: \"ff5cc63b-b4ad-42e5-b920-8fcec7abfa20\") " pod="openshift-marketplace/redhat-marketplace-cwv9f" Jan 29 12:07:11 crc kubenswrapper[4766]: I0129 12:07:11.460289 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff5cc63b-b4ad-42e5-b920-8fcec7abfa20-utilities\") pod \"redhat-marketplace-cwv9f\" (UID: \"ff5cc63b-b4ad-42e5-b920-8fcec7abfa20\") " pod="openshift-marketplace/redhat-marketplace-cwv9f" Jan 29 12:07:11 crc kubenswrapper[4766]: I0129 12:07:11.460389 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff5cc63b-b4ad-42e5-b920-8fcec7abfa20-catalog-content\") pod \"redhat-marketplace-cwv9f\" (UID: \"ff5cc63b-b4ad-42e5-b920-8fcec7abfa20\") " pod="openshift-marketplace/redhat-marketplace-cwv9f" Jan 29 12:07:11 crc kubenswrapper[4766]: I0129 12:07:11.561834 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fvpv\" (UniqueName: \"kubernetes.io/projected/ff5cc63b-b4ad-42e5-b920-8fcec7abfa20-kube-api-access-7fvpv\") pod \"redhat-marketplace-cwv9f\" (UID: \"ff5cc63b-b4ad-42e5-b920-8fcec7abfa20\") " pod="openshift-marketplace/redhat-marketplace-cwv9f" Jan 29 12:07:11 crc kubenswrapper[4766]: I0129 12:07:11.562191 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff5cc63b-b4ad-42e5-b920-8fcec7abfa20-utilities\") pod \"redhat-marketplace-cwv9f\" (UID: \"ff5cc63b-b4ad-42e5-b920-8fcec7abfa20\") " pod="openshift-marketplace/redhat-marketplace-cwv9f" Jan 29 12:07:11 crc kubenswrapper[4766]: I0129 12:07:11.562389 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff5cc63b-b4ad-42e5-b920-8fcec7abfa20-catalog-content\") pod \"redhat-marketplace-cwv9f\" (UID: \"ff5cc63b-b4ad-42e5-b920-8fcec7abfa20\") " pod="openshift-marketplace/redhat-marketplace-cwv9f" Jan 29 12:07:11 crc kubenswrapper[4766]: I0129 12:07:11.562736 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff5cc63b-b4ad-42e5-b920-8fcec7abfa20-utilities\") pod \"redhat-marketplace-cwv9f\" (UID: \"ff5cc63b-b4ad-42e5-b920-8fcec7abfa20\") " pod="openshift-marketplace/redhat-marketplace-cwv9f" Jan 29 12:07:11 crc kubenswrapper[4766]: I0129 12:07:11.562791 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff5cc63b-b4ad-42e5-b920-8fcec7abfa20-catalog-content\") pod \"redhat-marketplace-cwv9f\" (UID: \"ff5cc63b-b4ad-42e5-b920-8fcec7abfa20\") " pod="openshift-marketplace/redhat-marketplace-cwv9f" Jan 29 12:07:11 crc kubenswrapper[4766]: I0129 12:07:11.584039 4766 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7fvpv\" (UniqueName: \"kubernetes.io/projected/ff5cc63b-b4ad-42e5-b920-8fcec7abfa20-kube-api-access-7fvpv\") pod \"redhat-marketplace-cwv9f\" (UID: \"ff5cc63b-b4ad-42e5-b920-8fcec7abfa20\") " pod="openshift-marketplace/redhat-marketplace-cwv9f" Jan 29 12:07:11 crc kubenswrapper[4766]: I0129 12:07:11.695167 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwv9f" Jan 29 12:07:12 crc kubenswrapper[4766]: I0129 12:07:12.145078 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwv9f"] Jan 29 12:07:12 crc kubenswrapper[4766]: W0129 12:07:12.147380 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff5cc63b_b4ad_42e5_b920_8fcec7abfa20.slice/crio-5365cec0957cf4ff28f73c1b687d045fc6613bdff1094d17ef0e27f098c60f3d WatchSource:0}: Error finding container 5365cec0957cf4ff28f73c1b687d045fc6613bdff1094d17ef0e27f098c60f3d: Status 404 returned error can't find the container with id 5365cec0957cf4ff28f73c1b687d045fc6613bdff1094d17ef0e27f098c60f3d Jan 29 12:07:12 crc kubenswrapper[4766]: I0129 12:07:12.260037 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwv9f" event={"ID":"ff5cc63b-b4ad-42e5-b920-8fcec7abfa20","Type":"ContainerStarted","Data":"5365cec0957cf4ff28f73c1b687d045fc6613bdff1094d17ef0e27f098c60f3d"} Jan 29 12:07:12 crc kubenswrapper[4766]: I0129 12:07:12.262182 4766 generic.go:334] "Generic (PLEG): container finished" podID="65b66950-6e3a-470a-b729-0591841a404b" containerID="bb957b93e063ab8d69160d2b9859c594b04b874d84bab29a522f71a270c19efc" exitCode=0 Jan 29 12:07:12 crc kubenswrapper[4766]: I0129 12:07:12.262248 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bddxn" event={"ID":"65b66950-6e3a-470a-b729-0591841a404b","Type":"ContainerDied","Data":"bb957b93e063ab8d69160d2b9859c594b04b874d84bab29a522f71a270c19efc"} Jan 29 12:07:12 crc kubenswrapper[4766]: I0129 12:07:12.264102 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 12:07:13 crc kubenswrapper[4766]: I0129 12:07:13.270522 4766 generic.go:334] "Generic (PLEG): container finished" podID="ff5cc63b-b4ad-42e5-b920-8fcec7abfa20" containerID="0388dddb37e50b2902b70eb8da5c4dcaa7ca434c97c33477289d502ff813af96" exitCode=0 Jan 29 12:07:13 crc kubenswrapper[4766]: I0129 12:07:13.270612 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwv9f" event={"ID":"ff5cc63b-b4ad-42e5-b920-8fcec7abfa20","Type":"ContainerDied","Data":"0388dddb37e50b2902b70eb8da5c4dcaa7ca434c97c33477289d502ff813af96"} Jan 29 12:07:13 crc kubenswrapper[4766]: I0129 12:07:13.273254 4766 generic.go:334] "Generic (PLEG): container finished" podID="65b66950-6e3a-470a-b729-0591841a404b" containerID="eac8b434f18ac531c860e0847c79a7f330551b0688bd88946be54034a52a124b" exitCode=0 Jan 29 12:07:13 crc kubenswrapper[4766]: I0129 12:07:13.273280 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bddxn" event={"ID":"65b66950-6e3a-470a-b729-0591841a404b","Type":"ContainerDied","Data":"eac8b434f18ac531c860e0847c79a7f330551b0688bd88946be54034a52a124b"} Jan 29 12:07:14 crc kubenswrapper[4766]: I0129 12:07:14.284291 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-cwv9f" event={"ID":"ff5cc63b-b4ad-42e5-b920-8fcec7abfa20","Type":"ContainerStarted","Data":"9eb5d3bdfe6fd80b0e2d48e1cdab009ce8dafaca415631de6a25f43bc10e7839"} Jan 29 12:07:14 crc kubenswrapper[4766]: I0129 12:07:14.286837 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bddxn" event={"ID":"65b66950-6e3a-470a-b729-0591841a404b","Type":"ContainerStarted","Data":"1ddbd07272328b71463b6a37c6aaeda5e1d66a0473826bc693d4ab08739dfd26"} Jan 29 12:07:14 crc kubenswrapper[4766]: I0129 12:07:14.329109 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bddxn" podStartSLOduration=2.929842949 podStartE2EDuration="4.32909053s" podCreationTimestamp="2026-01-29 12:07:10 +0000 UTC" firstStartedPulling="2026-01-29 12:07:12.263711868 +0000 UTC m=+2769.376104879" lastFinishedPulling="2026-01-29 12:07:13.662959449 +0000 UTC m=+2770.775352460" observedRunningTime="2026-01-29 12:07:14.325096585 +0000 UTC m=+2771.437489616" watchObservedRunningTime="2026-01-29 12:07:14.32909053 +0000 UTC m=+2771.441483551" Jan 29 12:07:15 crc kubenswrapper[4766]: I0129 12:07:15.295351 4766 generic.go:334] "Generic (PLEG): container finished" podID="ff5cc63b-b4ad-42e5-b920-8fcec7abfa20" containerID="9eb5d3bdfe6fd80b0e2d48e1cdab009ce8dafaca415631de6a25f43bc10e7839" exitCode=0 Jan 29 12:07:15 crc kubenswrapper[4766]: I0129 12:07:15.295544 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwv9f" event={"ID":"ff5cc63b-b4ad-42e5-b920-8fcec7abfa20","Type":"ContainerDied","Data":"9eb5d3bdfe6fd80b0e2d48e1cdab009ce8dafaca415631de6a25f43bc10e7839"} Jan 29 12:07:16 crc kubenswrapper[4766]: I0129 12:07:16.303492 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwv9f" event={"ID":"ff5cc63b-b4ad-42e5-b920-8fcec7abfa20","Type":"ContainerStarted","Data":"5e72535885aaab4710804376e97b8ea140d4c9dbf685df6895a4066ff0c4a0d8"} Jan 29 12:07:16 crc kubenswrapper[4766]: I0129 12:07:16.326967 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cwv9f" podStartSLOduration=2.840622879 podStartE2EDuration="5.326949121s" podCreationTimestamp="2026-01-29 12:07:11 +0000 UTC" firstStartedPulling="2026-01-29 12:07:13.272925765 +0000 UTC m=+2770.385318776" lastFinishedPulling="2026-01-29 12:07:15.759252007 +0000 UTC m=+2772.871645018" observedRunningTime="2026-01-29 12:07:16.322616545 +0000 UTC m=+2773.435009556" watchObservedRunningTime="2026-01-29 12:07:16.326949121 +0000 UTC m=+2773.439342132" Jan 29 12:07:16 crc kubenswrapper[4766]: I0129 12:07:16.362568 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:07:16 crc kubenswrapper[4766]: I0129 12:07:16.362634 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:07:20 crc kubenswrapper[4766]: I0129 12:07:20.713364 4766 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-bddxn" Jan 29 12:07:20 crc kubenswrapper[4766]: I0129 12:07:20.713701 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bddxn" Jan 29 12:07:20 crc kubenswrapper[4766]: I0129 12:07:20.792051 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bddxn" Jan 29 12:07:21 crc kubenswrapper[4766]: I0129 12:07:21.380816 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bddxn" Jan 29 12:07:21 crc kubenswrapper[4766]: I0129 12:07:21.428918 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bddxn"] Jan 29 12:07:21 crc kubenswrapper[4766]: I0129 12:07:21.695658 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cwv9f" Jan 29 12:07:21 crc kubenswrapper[4766]: I0129 12:07:21.695716 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cwv9f" Jan 29 12:07:21 crc kubenswrapper[4766]: I0129 12:07:21.736052 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cwv9f" Jan 29 12:07:22 crc kubenswrapper[4766]: I0129 12:07:22.385201 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cwv9f" Jan 29 12:07:23 crc kubenswrapper[4766]: I0129 12:07:23.349590 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bddxn" podUID="65b66950-6e3a-470a-b729-0591841a404b" containerName="registry-server" containerID="cri-o://1ddbd07272328b71463b6a37c6aaeda5e1d66a0473826bc693d4ab08739dfd26" gracePeriod=2 Jan 29 12:07:23 crc kubenswrapper[4766]: I0129 12:07:23.425645 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwv9f"] Jan 29 12:07:24 crc kubenswrapper[4766]: I0129 12:07:24.220806 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bddxn" Jan 29 12:07:24 crc kubenswrapper[4766]: I0129 12:07:24.359022 4766 generic.go:334] "Generic (PLEG): container finished" podID="65b66950-6e3a-470a-b729-0591841a404b" containerID="1ddbd07272328b71463b6a37c6aaeda5e1d66a0473826bc693d4ab08739dfd26" exitCode=0 Jan 29 12:07:24 crc kubenswrapper[4766]: I0129 12:07:24.359083 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bddxn" Jan 29 12:07:24 crc kubenswrapper[4766]: I0129 12:07:24.359088 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bddxn" event={"ID":"65b66950-6e3a-470a-b729-0591841a404b","Type":"ContainerDied","Data":"1ddbd07272328b71463b6a37c6aaeda5e1d66a0473826bc693d4ab08739dfd26"} Jan 29 12:07:24 crc kubenswrapper[4766]: I0129 12:07:24.359146 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bddxn" event={"ID":"65b66950-6e3a-470a-b729-0591841a404b","Type":"ContainerDied","Data":"9bd06fd5a6fc9eeb48162b1e5a29be148a8251aaec331b73618f25a12e620cb8"} Jan 29 12:07:24 crc kubenswrapper[4766]: I0129 12:07:24.359168 4766 scope.go:117] "RemoveContainer" containerID="1ddbd07272328b71463b6a37c6aaeda5e1d66a0473826bc693d4ab08739dfd26" Jan 29 12:07:24 crc kubenswrapper[4766]: I0129 12:07:24.359519 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cwv9f" podUID="ff5cc63b-b4ad-42e5-b920-8fcec7abfa20" containerName="registry-server" containerID="cri-o://5e72535885aaab4710804376e97b8ea140d4c9dbf685df6895a4066ff0c4a0d8" gracePeriod=2 Jan 29 12:07:24 crc kubenswrapper[4766]: I0129 12:07:24.363031 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b66950-6e3a-470a-b729-0591841a404b-catalog-content\") pod \"65b66950-6e3a-470a-b729-0591841a404b\" (UID: \"65b66950-6e3a-470a-b729-0591841a404b\") " Jan 29 12:07:24 crc kubenswrapper[4766]: I0129 12:07:24.363089 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b66950-6e3a-470a-b729-0591841a404b-utilities\") pod \"65b66950-6e3a-470a-b729-0591841a404b\" (UID: \"65b66950-6e3a-470a-b729-0591841a404b\") " Jan 29 12:07:24 crc kubenswrapper[4766]: I0129 12:07:24.363262 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd2qf\" (UniqueName: \"kubernetes.io/projected/65b66950-6e3a-470a-b729-0591841a404b-kube-api-access-fd2qf\") pod \"65b66950-6e3a-470a-b729-0591841a404b\" (UID: \"65b66950-6e3a-470a-b729-0591841a404b\") " Jan 29 12:07:24 crc kubenswrapper[4766]: I0129 12:07:24.364464 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65b66950-6e3a-470a-b729-0591841a404b-utilities" (OuterVolumeSpecName: "utilities") pod "65b66950-6e3a-470a-b729-0591841a404b" (UID: "65b66950-6e3a-470a-b729-0591841a404b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:07:24 crc kubenswrapper[4766]: I0129 12:07:24.375320 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65b66950-6e3a-470a-b729-0591841a404b-kube-api-access-fd2qf" (OuterVolumeSpecName: "kube-api-access-fd2qf") pod "65b66950-6e3a-470a-b729-0591841a404b" (UID: "65b66950-6e3a-470a-b729-0591841a404b"). InnerVolumeSpecName "kube-api-access-fd2qf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:07:24 crc kubenswrapper[4766]: I0129 12:07:24.378446 4766 scope.go:117] "RemoveContainer" containerID="eac8b434f18ac531c860e0847c79a7f330551b0688bd88946be54034a52a124b" Jan 29 12:07:24 crc kubenswrapper[4766]: I0129 12:07:24.402506 4766 scope.go:117] "RemoveContainer" containerID="bb957b93e063ab8d69160d2b9859c594b04b874d84bab29a522f71a270c19efc" Jan 29 12:07:24 crc kubenswrapper[4766]: I0129 12:07:24.418193 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65b66950-6e3a-470a-b729-0591841a404b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65b66950-6e3a-470a-b729-0591841a404b" (UID: "65b66950-6e3a-470a-b729-0591841a404b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:07:24 crc kubenswrapper[4766]: I0129 12:07:24.432699 4766 scope.go:117] "RemoveContainer" containerID="1ddbd07272328b71463b6a37c6aaeda5e1d66a0473826bc693d4ab08739dfd26" Jan 29 12:07:24 crc kubenswrapper[4766]: E0129 12:07:24.433130 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ddbd07272328b71463b6a37c6aaeda5e1d66a0473826bc693d4ab08739dfd26\": container with ID starting with 1ddbd07272328b71463b6a37c6aaeda5e1d66a0473826bc693d4ab08739dfd26 not found: ID does not exist" containerID="1ddbd07272328b71463b6a37c6aaeda5e1d66a0473826bc693d4ab08739dfd26" Jan 29 12:07:24 crc kubenswrapper[4766]: I0129 12:07:24.433162 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ddbd07272328b71463b6a37c6aaeda5e1d66a0473826bc693d4ab08739dfd26"} err="failed to get container status \"1ddbd07272328b71463b6a37c6aaeda5e1d66a0473826bc693d4ab08739dfd26\": rpc error: code = NotFound desc = could not find container \"1ddbd07272328b71463b6a37c6aaeda5e1d66a0473826bc693d4ab08739dfd26\": container with ID starting with 1ddbd07272328b71463b6a37c6aaeda5e1d66a0473826bc693d4ab08739dfd26 not found: ID does not exist" Jan 29 12:07:24 crc kubenswrapper[4766]: I0129 12:07:24.433182 4766 scope.go:117] "RemoveContainer" containerID="eac8b434f18ac531c860e0847c79a7f330551b0688bd88946be54034a52a124b" Jan 29 12:07:24 crc kubenswrapper[4766]: E0129 12:07:24.433639 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eac8b434f18ac531c860e0847c79a7f330551b0688bd88946be54034a52a124b\": container with ID starting with eac8b434f18ac531c860e0847c79a7f330551b0688bd88946be54034a52a124b not found: ID does not exist" containerID="eac8b434f18ac531c860e0847c79a7f330551b0688bd88946be54034a52a124b" Jan 29 12:07:24 crc kubenswrapper[4766]: I0129 12:07:24.433659 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eac8b434f18ac531c860e0847c79a7f330551b0688bd88946be54034a52a124b"} err="failed to get container status \"eac8b434f18ac531c860e0847c79a7f330551b0688bd88946be54034a52a124b\": rpc error: code = NotFound desc = could not find container \"eac8b434f18ac531c860e0847c79a7f330551b0688bd88946be54034a52a124b\": container with ID starting with eac8b434f18ac531c860e0847c79a7f330551b0688bd88946be54034a52a124b not found: ID does not exist" Jan 29 12:07:24 crc kubenswrapper[4766]: I0129 12:07:24.433671 4766 scope.go:117] "RemoveContainer" containerID="bb957b93e063ab8d69160d2b9859c594b04b874d84bab29a522f71a270c19efc" Jan 29 12:07:24 crc kubenswrapper[4766]: 
E0129 12:07:24.433911 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb957b93e063ab8d69160d2b9859c594b04b874d84bab29a522f71a270c19efc\": container with ID starting with bb957b93e063ab8d69160d2b9859c594b04b874d84bab29a522f71a270c19efc not found: ID does not exist" containerID="bb957b93e063ab8d69160d2b9859c594b04b874d84bab29a522f71a270c19efc" Jan 29 12:07:24 crc kubenswrapper[4766]: I0129 12:07:24.433934 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb957b93e063ab8d69160d2b9859c594b04b874d84bab29a522f71a270c19efc"} err="failed to get container status \"bb957b93e063ab8d69160d2b9859c594b04b874d84bab29a522f71a270c19efc\": rpc error: code = NotFound desc = could not find container \"bb957b93e063ab8d69160d2b9859c594b04b874d84bab29a522f71a270c19efc\": container with ID starting with bb957b93e063ab8d69160d2b9859c594b04b874d84bab29a522f71a270c19efc not found: ID does not exist" Jan 29 12:07:24 crc kubenswrapper[4766]: I0129 12:07:24.464933 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b66950-6e3a-470a-b729-0591841a404b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:07:24 crc kubenswrapper[4766]: I0129 12:07:24.464974 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b66950-6e3a-470a-b729-0591841a404b-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:07:24 crc kubenswrapper[4766]: I0129 12:07:24.464984 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd2qf\" (UniqueName: \"kubernetes.io/projected/65b66950-6e3a-470a-b729-0591841a404b-kube-api-access-fd2qf\") on node \"crc\" DevicePath \"\"" Jan 29 12:07:24 crc kubenswrapper[4766]: I0129 12:07:24.699626 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bddxn"] Jan 29 12:07:24 crc kubenswrapper[4766]: I0129 12:07:24.705199 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bddxn"] Jan 29 12:07:25 crc kubenswrapper[4766]: I0129 12:07:25.243494 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65b66950-6e3a-470a-b729-0591841a404b" path="/var/lib/kubelet/pods/65b66950-6e3a-470a-b729-0591841a404b/volumes" Jan 29 12:07:25 crc kubenswrapper[4766]: I0129 12:07:25.327440 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwv9f" Jan 29 12:07:25 crc kubenswrapper[4766]: I0129 12:07:25.368686 4766 generic.go:334] "Generic (PLEG): container finished" podID="ff5cc63b-b4ad-42e5-b920-8fcec7abfa20" containerID="5e72535885aaab4710804376e97b8ea140d4c9dbf685df6895a4066ff0c4a0d8" exitCode=0 Jan 29 12:07:25 crc kubenswrapper[4766]: I0129 12:07:25.368722 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwv9f" Jan 29 12:07:25 crc kubenswrapper[4766]: I0129 12:07:25.368765 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwv9f" event={"ID":"ff5cc63b-b4ad-42e5-b920-8fcec7abfa20","Type":"ContainerDied","Data":"5e72535885aaab4710804376e97b8ea140d4c9dbf685df6895a4066ff0c4a0d8"} Jan 29 12:07:25 crc kubenswrapper[4766]: I0129 12:07:25.368794 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwv9f" event={"ID":"ff5cc63b-b4ad-42e5-b920-8fcec7abfa20","Type":"ContainerDied","Data":"5365cec0957cf4ff28f73c1b687d045fc6613bdff1094d17ef0e27f098c60f3d"} Jan 29 12:07:25 crc kubenswrapper[4766]: I0129 12:07:25.368811 4766 scope.go:117] "RemoveContainer" containerID="5e72535885aaab4710804376e97b8ea140d4c9dbf685df6895a4066ff0c4a0d8" Jan 29 12:07:25 crc kubenswrapper[4766]: I0129 12:07:25.388834 4766 scope.go:117] "RemoveContainer" containerID="9eb5d3bdfe6fd80b0e2d48e1cdab009ce8dafaca415631de6a25f43bc10e7839" Jan 29 12:07:25 crc kubenswrapper[4766]: I0129 12:07:25.415546 4766 scope.go:117] "RemoveContainer" containerID="0388dddb37e50b2902b70eb8da5c4dcaa7ca434c97c33477289d502ff813af96" Jan 29 12:07:25 crc kubenswrapper[4766]: I0129 12:07:25.434989 4766 scope.go:117] "RemoveContainer" containerID="5e72535885aaab4710804376e97b8ea140d4c9dbf685df6895a4066ff0c4a0d8" Jan 29 12:07:25 crc kubenswrapper[4766]: E0129 12:07:25.435656 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e72535885aaab4710804376e97b8ea140d4c9dbf685df6895a4066ff0c4a0d8\": container with ID starting with 5e72535885aaab4710804376e97b8ea140d4c9dbf685df6895a4066ff0c4a0d8 not found: ID does not exist" containerID="5e72535885aaab4710804376e97b8ea140d4c9dbf685df6895a4066ff0c4a0d8" Jan 29 12:07:25 crc kubenswrapper[4766]: I0129 12:07:25.435692 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e72535885aaab4710804376e97b8ea140d4c9dbf685df6895a4066ff0c4a0d8"} err="failed to get container status \"5e72535885aaab4710804376e97b8ea140d4c9dbf685df6895a4066ff0c4a0d8\": rpc error: code = NotFound desc = could not find container \"5e72535885aaab4710804376e97b8ea140d4c9dbf685df6895a4066ff0c4a0d8\": container with ID starting with 5e72535885aaab4710804376e97b8ea140d4c9dbf685df6895a4066ff0c4a0d8 not found: ID does not exist" Jan 29 12:07:25 crc kubenswrapper[4766]: I0129 12:07:25.435722 4766 scope.go:117] "RemoveContainer" containerID="9eb5d3bdfe6fd80b0e2d48e1cdab009ce8dafaca415631de6a25f43bc10e7839" Jan 29 12:07:25 crc kubenswrapper[4766]: E0129 12:07:25.436030 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9eb5d3bdfe6fd80b0e2d48e1cdab009ce8dafaca415631de6a25f43bc10e7839\": container with ID starting with 9eb5d3bdfe6fd80b0e2d48e1cdab009ce8dafaca415631de6a25f43bc10e7839 not found: ID does not exist" containerID="9eb5d3bdfe6fd80b0e2d48e1cdab009ce8dafaca415631de6a25f43bc10e7839" Jan 29 12:07:25 crc kubenswrapper[4766]: I0129 12:07:25.436081 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eb5d3bdfe6fd80b0e2d48e1cdab009ce8dafaca415631de6a25f43bc10e7839"} err="failed to get container status \"9eb5d3bdfe6fd80b0e2d48e1cdab009ce8dafaca415631de6a25f43bc10e7839\": rpc error: code = NotFound desc = could not find container 
\"9eb5d3bdfe6fd80b0e2d48e1cdab009ce8dafaca415631de6a25f43bc10e7839\": container with ID starting with 9eb5d3bdfe6fd80b0e2d48e1cdab009ce8dafaca415631de6a25f43bc10e7839 not found: ID does not exist" Jan 29 12:07:25 crc kubenswrapper[4766]: I0129 12:07:25.436103 4766 scope.go:117] "RemoveContainer" containerID="0388dddb37e50b2902b70eb8da5c4dcaa7ca434c97c33477289d502ff813af96" Jan 29 12:07:25 crc kubenswrapper[4766]: E0129 12:07:25.436565 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0388dddb37e50b2902b70eb8da5c4dcaa7ca434c97c33477289d502ff813af96\": container with ID starting with 0388dddb37e50b2902b70eb8da5c4dcaa7ca434c97c33477289d502ff813af96 not found: ID does not exist" containerID="0388dddb37e50b2902b70eb8da5c4dcaa7ca434c97c33477289d502ff813af96" Jan 29 12:07:25 crc kubenswrapper[4766]: I0129 12:07:25.436611 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0388dddb37e50b2902b70eb8da5c4dcaa7ca434c97c33477289d502ff813af96"} err="failed to get container status \"0388dddb37e50b2902b70eb8da5c4dcaa7ca434c97c33477289d502ff813af96\": rpc error: code = NotFound desc = could not find container \"0388dddb37e50b2902b70eb8da5c4dcaa7ca434c97c33477289d502ff813af96\": container with ID starting with 0388dddb37e50b2902b70eb8da5c4dcaa7ca434c97c33477289d502ff813af96 not found: ID does not exist" Jan 29 12:07:25 crc kubenswrapper[4766]: I0129 12:07:25.486076 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff5cc63b-b4ad-42e5-b920-8fcec7abfa20-catalog-content\") pod \"ff5cc63b-b4ad-42e5-b920-8fcec7abfa20\" (UID: \"ff5cc63b-b4ad-42e5-b920-8fcec7abfa20\") " Jan 29 12:07:25 crc kubenswrapper[4766]: I0129 12:07:25.486228 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff5cc63b-b4ad-42e5-b920-8fcec7abfa20-utilities\") pod \"ff5cc63b-b4ad-42e5-b920-8fcec7abfa20\" (UID: \"ff5cc63b-b4ad-42e5-b920-8fcec7abfa20\") " Jan 29 12:07:25 crc kubenswrapper[4766]: I0129 12:07:25.486312 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fvpv\" (UniqueName: \"kubernetes.io/projected/ff5cc63b-b4ad-42e5-b920-8fcec7abfa20-kube-api-access-7fvpv\") pod \"ff5cc63b-b4ad-42e5-b920-8fcec7abfa20\" (UID: \"ff5cc63b-b4ad-42e5-b920-8fcec7abfa20\") " Jan 29 12:07:25 crc kubenswrapper[4766]: I0129 12:07:25.487271 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff5cc63b-b4ad-42e5-b920-8fcec7abfa20-utilities" (OuterVolumeSpecName: "utilities") pod "ff5cc63b-b4ad-42e5-b920-8fcec7abfa20" (UID: "ff5cc63b-b4ad-42e5-b920-8fcec7abfa20"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:07:25 crc kubenswrapper[4766]: I0129 12:07:25.489645 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff5cc63b-b4ad-42e5-b920-8fcec7abfa20-kube-api-access-7fvpv" (OuterVolumeSpecName: "kube-api-access-7fvpv") pod "ff5cc63b-b4ad-42e5-b920-8fcec7abfa20" (UID: "ff5cc63b-b4ad-42e5-b920-8fcec7abfa20"). InnerVolumeSpecName "kube-api-access-7fvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:07:25 crc kubenswrapper[4766]: I0129 12:07:25.510101 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff5cc63b-b4ad-42e5-b920-8fcec7abfa20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff5cc63b-b4ad-42e5-b920-8fcec7abfa20" (UID: "ff5cc63b-b4ad-42e5-b920-8fcec7abfa20"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:07:25 crc kubenswrapper[4766]: I0129 12:07:25.588072 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff5cc63b-b4ad-42e5-b920-8fcec7abfa20-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:07:25 crc kubenswrapper[4766]: I0129 12:07:25.588124 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff5cc63b-b4ad-42e5-b920-8fcec7abfa20-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:07:25 crc kubenswrapper[4766]: I0129 12:07:25.588141 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fvpv\" (UniqueName: \"kubernetes.io/projected/ff5cc63b-b4ad-42e5-b920-8fcec7abfa20-kube-api-access-7fvpv\") on node \"crc\" DevicePath \"\"" Jan 29 12:07:25 crc kubenswrapper[4766]: I0129 12:07:25.704784 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwv9f"] Jan 29 12:07:25 crc kubenswrapper[4766]: I0129 12:07:25.712744 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwv9f"] Jan 29 12:07:27 crc kubenswrapper[4766]: I0129 12:07:27.231718 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff5cc63b-b4ad-42e5-b920-8fcec7abfa20" path="/var/lib/kubelet/pods/ff5cc63b-b4ad-42e5-b920-8fcec7abfa20/volumes" Jan 29 12:07:46 crc kubenswrapper[4766]: I0129 12:07:46.362226 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:07:46 crc kubenswrapper[4766]: I0129 12:07:46.362836 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:08:16 crc kubenswrapper[4766]: I0129 12:08:16.361872 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:08:16 crc kubenswrapper[4766]: I0129 12:08:16.363442 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:08:16 crc kubenswrapper[4766]: I0129 12:08:16.363588 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-npgg8" Jan 29 12:08:16 crc kubenswrapper[4766]: I0129 12:08:16.364315 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6384f324226c58cf49617538784c9b07815b46089ee6a92aa42286684fbc6cae"} pod="openshift-machine-config-operator/machine-config-daemon-npgg8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:08:16 crc kubenswrapper[4766]: I0129 12:08:16.364503 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" containerID="cri-o://6384f324226c58cf49617538784c9b07815b46089ee6a92aa42286684fbc6cae" gracePeriod=600 Jan 29 12:08:16 crc kubenswrapper[4766]: I0129 12:08:16.738548 4766 generic.go:334] "Generic (PLEG): container finished" podID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerID="6384f324226c58cf49617538784c9b07815b46089ee6a92aa42286684fbc6cae" exitCode=0 Jan 29 12:08:16 crc kubenswrapper[4766]: I0129 12:08:16.738603 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" event={"ID":"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a","Type":"ContainerDied","Data":"6384f324226c58cf49617538784c9b07815b46089ee6a92aa42286684fbc6cae"} Jan 29 12:08:16 crc kubenswrapper[4766]: I0129 12:08:16.738642 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" event={"ID":"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a","Type":"ContainerStarted","Data":"ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9"} Jan 29 12:08:16 crc kubenswrapper[4766]: I0129 12:08:16.738730 4766 scope.go:117] "RemoveContainer" containerID="89e27c699a97296c95cfbdd2ee799b29462a5edf91e4f08b2ff33a17f796e191" Jan 29 12:10:16 crc kubenswrapper[4766]: I0129 12:10:16.362937 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:10:16 crc kubenswrapper[4766]: I0129 12:10:16.363994 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:10:34 crc kubenswrapper[4766]: I0129 12:10:34.002821 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fbcv8"] Jan 29 12:10:34 crc kubenswrapper[4766]: E0129 12:10:34.003751 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff5cc63b-b4ad-42e5-b920-8fcec7abfa20" containerName="extract-utilities" Jan 29 12:10:34 crc kubenswrapper[4766]: I0129 12:10:34.003769 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff5cc63b-b4ad-42e5-b920-8fcec7abfa20" containerName="extract-utilities" Jan 29 12:10:34 crc kubenswrapper[4766]: E0129 12:10:34.003779 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff5cc63b-b4ad-42e5-b920-8fcec7abfa20" containerName="registry-server" Jan 29 12:10:34 crc 
kubenswrapper[4766]: I0129 12:10:34.003786 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff5cc63b-b4ad-42e5-b920-8fcec7abfa20" containerName="registry-server" Jan 29 12:10:34 crc kubenswrapper[4766]: E0129 12:10:34.003807 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff5cc63b-b4ad-42e5-b920-8fcec7abfa20" containerName="extract-content" Jan 29 12:10:34 crc kubenswrapper[4766]: I0129 12:10:34.003815 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff5cc63b-b4ad-42e5-b920-8fcec7abfa20" containerName="extract-content" Jan 29 12:10:34 crc kubenswrapper[4766]: E0129 12:10:34.003825 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b66950-6e3a-470a-b729-0591841a404b" containerName="registry-server" Jan 29 12:10:34 crc kubenswrapper[4766]: I0129 12:10:34.003831 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b66950-6e3a-470a-b729-0591841a404b" containerName="registry-server" Jan 29 12:10:34 crc kubenswrapper[4766]: E0129 12:10:34.003848 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b66950-6e3a-470a-b729-0591841a404b" containerName="extract-utilities" Jan 29 12:10:34 crc kubenswrapper[4766]: I0129 12:10:34.003854 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b66950-6e3a-470a-b729-0591841a404b" containerName="extract-utilities" Jan 29 12:10:34 crc kubenswrapper[4766]: E0129 12:10:34.003867 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b66950-6e3a-470a-b729-0591841a404b" containerName="extract-content" Jan 29 12:10:34 crc kubenswrapper[4766]: I0129 12:10:34.003873 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b66950-6e3a-470a-b729-0591841a404b" containerName="extract-content" Jan 29 12:10:34 crc kubenswrapper[4766]: I0129 12:10:34.004000 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff5cc63b-b4ad-42e5-b920-8fcec7abfa20" containerName="registry-server" Jan 29 12:10:34 crc kubenswrapper[4766]: I0129 12:10:34.004020 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b66950-6e3a-470a-b729-0591841a404b" containerName="registry-server" Jan 29 12:10:34 crc kubenswrapper[4766]: I0129 12:10:34.004990 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fbcv8" Jan 29 12:10:34 crc kubenswrapper[4766]: I0129 12:10:34.021375 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fbcv8"] Jan 29 12:10:34 crc kubenswrapper[4766]: I0129 12:10:34.167206 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94jmq\" (UniqueName: \"kubernetes.io/projected/c421068d-238b-4b02-99ef-440800040ff1-kube-api-access-94jmq\") pod \"redhat-operators-fbcv8\" (UID: \"c421068d-238b-4b02-99ef-440800040ff1\") " pod="openshift-marketplace/redhat-operators-fbcv8" Jan 29 12:10:34 crc kubenswrapper[4766]: I0129 12:10:34.167579 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c421068d-238b-4b02-99ef-440800040ff1-catalog-content\") pod \"redhat-operators-fbcv8\" (UID: \"c421068d-238b-4b02-99ef-440800040ff1\") " pod="openshift-marketplace/redhat-operators-fbcv8" Jan 29 12:10:34 crc kubenswrapper[4766]: I0129 12:10:34.167758 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c421068d-238b-4b02-99ef-440800040ff1-utilities\") pod \"redhat-operators-fbcv8\" (UID: \"c421068d-238b-4b02-99ef-440800040ff1\") " pod="openshift-marketplace/redhat-operators-fbcv8" Jan 29 12:10:34 crc kubenswrapper[4766]: I0129 12:10:34.269558 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c421068d-238b-4b02-99ef-440800040ff1-utilities\") pod \"redhat-operators-fbcv8\" (UID: \"c421068d-238b-4b02-99ef-440800040ff1\") " pod="openshift-marketplace/redhat-operators-fbcv8" Jan 29 12:10:34 crc kubenswrapper[4766]: I0129 12:10:34.270008 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94jmq\" (UniqueName: \"kubernetes.io/projected/c421068d-238b-4b02-99ef-440800040ff1-kube-api-access-94jmq\") pod \"redhat-operators-fbcv8\" (UID: \"c421068d-238b-4b02-99ef-440800040ff1\") " pod="openshift-marketplace/redhat-operators-fbcv8" Jan 29 12:10:34 crc kubenswrapper[4766]: I0129 12:10:34.270134 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c421068d-238b-4b02-99ef-440800040ff1-catalog-content\") pod \"redhat-operators-fbcv8\" (UID: \"c421068d-238b-4b02-99ef-440800040ff1\") " pod="openshift-marketplace/redhat-operators-fbcv8" Jan 29 12:10:34 crc kubenswrapper[4766]: I0129 12:10:34.270140 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c421068d-238b-4b02-99ef-440800040ff1-utilities\") pod \"redhat-operators-fbcv8\" (UID: \"c421068d-238b-4b02-99ef-440800040ff1\") " pod="openshift-marketplace/redhat-operators-fbcv8" Jan 29 12:10:34 crc kubenswrapper[4766]: I0129 12:10:34.270385 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c421068d-238b-4b02-99ef-440800040ff1-catalog-content\") pod \"redhat-operators-fbcv8\" (UID: \"c421068d-238b-4b02-99ef-440800040ff1\") " pod="openshift-marketplace/redhat-operators-fbcv8" Jan 29 12:10:34 crc kubenswrapper[4766]: I0129 12:10:34.291333 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-94jmq\" (UniqueName: \"kubernetes.io/projected/c421068d-238b-4b02-99ef-440800040ff1-kube-api-access-94jmq\") pod \"redhat-operators-fbcv8\" (UID: \"c421068d-238b-4b02-99ef-440800040ff1\") " pod="openshift-marketplace/redhat-operators-fbcv8" Jan 29 12:10:34 crc kubenswrapper[4766]: I0129 12:10:34.325975 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fbcv8" Jan 29 12:10:34 crc kubenswrapper[4766]: I0129 12:10:34.782234 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fbcv8"] Jan 29 12:10:35 crc kubenswrapper[4766]: I0129 12:10:35.073739 4766 generic.go:334] "Generic (PLEG): container finished" podID="c421068d-238b-4b02-99ef-440800040ff1" containerID="ba95b99f0b3be4ad92023a817b9f53ac4cf87a40116c2e359e4f044d1550fb90" exitCode=0 Jan 29 12:10:35 crc kubenswrapper[4766]: I0129 12:10:35.073813 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbcv8" event={"ID":"c421068d-238b-4b02-99ef-440800040ff1","Type":"ContainerDied","Data":"ba95b99f0b3be4ad92023a817b9f53ac4cf87a40116c2e359e4f044d1550fb90"} Jan 29 12:10:35 crc kubenswrapper[4766]: I0129 12:10:35.074625 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbcv8" event={"ID":"c421068d-238b-4b02-99ef-440800040ff1","Type":"ContainerStarted","Data":"4d67025e71e1b27dcef31f830d5fd5d16186f0de38c81c230ac72635ae130cab"} Jan 29 12:10:36 crc kubenswrapper[4766]: I0129 12:10:36.085495 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbcv8" event={"ID":"c421068d-238b-4b02-99ef-440800040ff1","Type":"ContainerStarted","Data":"b3250b5d1f8f5957a2e309b71040f8ccc166b0f98557611209e8887cc3ed330a"} Jan 29 12:10:37 crc kubenswrapper[4766]: I0129 12:10:37.098104 4766 generic.go:334] "Generic (PLEG): container finished" podID="c421068d-238b-4b02-99ef-440800040ff1" containerID="b3250b5d1f8f5957a2e309b71040f8ccc166b0f98557611209e8887cc3ed330a" exitCode=0 Jan 29 12:10:37 crc kubenswrapper[4766]: I0129 12:10:37.098207 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbcv8" event={"ID":"c421068d-238b-4b02-99ef-440800040ff1","Type":"ContainerDied","Data":"b3250b5d1f8f5957a2e309b71040f8ccc166b0f98557611209e8887cc3ed330a"} Jan 29 12:10:38 crc kubenswrapper[4766]: I0129 12:10:38.109491 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbcv8" event={"ID":"c421068d-238b-4b02-99ef-440800040ff1","Type":"ContainerStarted","Data":"58d5faae7f74d082a0956788b94c2d40488f706eebcf621eff8445d2bc275eb3"} Jan 29 12:10:38 crc kubenswrapper[4766]: I0129 12:10:38.133182 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fbcv8" podStartSLOduration=2.609131504 podStartE2EDuration="5.133158861s" podCreationTimestamp="2026-01-29 12:10:33 +0000 UTC" firstStartedPulling="2026-01-29 12:10:35.075240119 +0000 UTC m=+2972.187633130" lastFinishedPulling="2026-01-29 12:10:37.599267476 +0000 UTC m=+2974.711660487" observedRunningTime="2026-01-29 12:10:38.130551387 +0000 UTC m=+2975.242944418" watchObservedRunningTime="2026-01-29 12:10:38.133158861 +0000 UTC m=+2975.245551872" Jan 29 12:10:44 crc kubenswrapper[4766]: I0129 12:10:44.327104 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fbcv8" 
Jan 29 12:10:44 crc kubenswrapper[4766]: I0129 12:10:44.327489 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fbcv8" Jan 29 12:10:44 crc kubenswrapper[4766]: I0129 12:10:44.369333 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fbcv8" Jan 29 12:10:45 crc kubenswrapper[4766]: I0129 12:10:45.192241 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fbcv8" Jan 29 12:10:45 crc kubenswrapper[4766]: I0129 12:10:45.238297 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fbcv8"] Jan 29 12:10:46 crc kubenswrapper[4766]: I0129 12:10:46.361865 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:10:46 crc kubenswrapper[4766]: I0129 12:10:46.361981 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:10:47 crc kubenswrapper[4766]: I0129 12:10:47.165006 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fbcv8" podUID="c421068d-238b-4b02-99ef-440800040ff1" containerName="registry-server" containerID="cri-o://58d5faae7f74d082a0956788b94c2d40488f706eebcf621eff8445d2bc275eb3" gracePeriod=2 Jan 29 12:10:48 crc kubenswrapper[4766]: I0129 12:10:48.649842 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fbcv8" Jan 29 12:10:48 crc kubenswrapper[4766]: I0129 12:10:48.749975 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c421068d-238b-4b02-99ef-440800040ff1-catalog-content\") pod \"c421068d-238b-4b02-99ef-440800040ff1\" (UID: \"c421068d-238b-4b02-99ef-440800040ff1\") " Jan 29 12:10:48 crc kubenswrapper[4766]: I0129 12:10:48.750098 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94jmq\" (UniqueName: \"kubernetes.io/projected/c421068d-238b-4b02-99ef-440800040ff1-kube-api-access-94jmq\") pod \"c421068d-238b-4b02-99ef-440800040ff1\" (UID: \"c421068d-238b-4b02-99ef-440800040ff1\") " Jan 29 12:10:48 crc kubenswrapper[4766]: I0129 12:10:48.750156 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c421068d-238b-4b02-99ef-440800040ff1-utilities\") pod \"c421068d-238b-4b02-99ef-440800040ff1\" (UID: \"c421068d-238b-4b02-99ef-440800040ff1\") " Jan 29 12:10:48 crc kubenswrapper[4766]: I0129 12:10:48.751318 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c421068d-238b-4b02-99ef-440800040ff1-utilities" (OuterVolumeSpecName: "utilities") pod "c421068d-238b-4b02-99ef-440800040ff1" (UID: "c421068d-238b-4b02-99ef-440800040ff1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:10:48 crc kubenswrapper[4766]: I0129 12:10:48.758041 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c421068d-238b-4b02-99ef-440800040ff1-kube-api-access-94jmq" (OuterVolumeSpecName: "kube-api-access-94jmq") pod "c421068d-238b-4b02-99ef-440800040ff1" (UID: "c421068d-238b-4b02-99ef-440800040ff1"). InnerVolumeSpecName "kube-api-access-94jmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:10:48 crc kubenswrapper[4766]: I0129 12:10:48.852160 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94jmq\" (UniqueName: \"kubernetes.io/projected/c421068d-238b-4b02-99ef-440800040ff1-kube-api-access-94jmq\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:48 crc kubenswrapper[4766]: I0129 12:10:48.852191 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c421068d-238b-4b02-99ef-440800040ff1-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:48 crc kubenswrapper[4766]: I0129 12:10:48.875860 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c421068d-238b-4b02-99ef-440800040ff1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c421068d-238b-4b02-99ef-440800040ff1" (UID: "c421068d-238b-4b02-99ef-440800040ff1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:10:48 crc kubenswrapper[4766]: I0129 12:10:48.953482 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c421068d-238b-4b02-99ef-440800040ff1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:49 crc kubenswrapper[4766]: I0129 12:10:49.187944 4766 generic.go:334] "Generic (PLEG): container finished" podID="c421068d-238b-4b02-99ef-440800040ff1" containerID="58d5faae7f74d082a0956788b94c2d40488f706eebcf621eff8445d2bc275eb3" exitCode=0 Jan 29 12:10:49 crc kubenswrapper[4766]: I0129 12:10:49.188001 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fbcv8" Jan 29 12:10:49 crc kubenswrapper[4766]: I0129 12:10:49.188002 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbcv8" event={"ID":"c421068d-238b-4b02-99ef-440800040ff1","Type":"ContainerDied","Data":"58d5faae7f74d082a0956788b94c2d40488f706eebcf621eff8445d2bc275eb3"} Jan 29 12:10:49 crc kubenswrapper[4766]: I0129 12:10:49.188274 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbcv8" event={"ID":"c421068d-238b-4b02-99ef-440800040ff1","Type":"ContainerDied","Data":"4d67025e71e1b27dcef31f830d5fd5d16186f0de38c81c230ac72635ae130cab"} Jan 29 12:10:49 crc kubenswrapper[4766]: I0129 12:10:49.188301 4766 scope.go:117] "RemoveContainer" containerID="58d5faae7f74d082a0956788b94c2d40488f706eebcf621eff8445d2bc275eb3" Jan 29 12:10:49 crc kubenswrapper[4766]: I0129 12:10:49.222528 4766 scope.go:117] "RemoveContainer" containerID="b3250b5d1f8f5957a2e309b71040f8ccc166b0f98557611209e8887cc3ed330a" Jan 29 12:10:49 crc kubenswrapper[4766]: I0129 12:10:49.238932 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fbcv8"] Jan 29 12:10:49 crc kubenswrapper[4766]: I0129 12:10:49.250605 4766 scope.go:117] "RemoveContainer" containerID="ba95b99f0b3be4ad92023a817b9f53ac4cf87a40116c2e359e4f044d1550fb90" Jan 29 12:10:49 crc kubenswrapper[4766]: I0129 12:10:49.257450 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fbcv8"] Jan 29 12:10:49 crc kubenswrapper[4766]: I0129 12:10:49.266832 4766 scope.go:117] "RemoveContainer" containerID="58d5faae7f74d082a0956788b94c2d40488f706eebcf621eff8445d2bc275eb3" Jan 29 12:10:49 crc kubenswrapper[4766]: E0129 12:10:49.267491 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58d5faae7f74d082a0956788b94c2d40488f706eebcf621eff8445d2bc275eb3\": container with ID starting with 58d5faae7f74d082a0956788b94c2d40488f706eebcf621eff8445d2bc275eb3 not found: ID does not exist" containerID="58d5faae7f74d082a0956788b94c2d40488f706eebcf621eff8445d2bc275eb3" Jan 29 12:10:49 crc kubenswrapper[4766]: I0129 12:10:49.267552 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58d5faae7f74d082a0956788b94c2d40488f706eebcf621eff8445d2bc275eb3"} err="failed to get container status \"58d5faae7f74d082a0956788b94c2d40488f706eebcf621eff8445d2bc275eb3\": rpc error: code = NotFound desc = could not find container \"58d5faae7f74d082a0956788b94c2d40488f706eebcf621eff8445d2bc275eb3\": container with ID starting with 58d5faae7f74d082a0956788b94c2d40488f706eebcf621eff8445d2bc275eb3 not found: ID does not exist" Jan 29 12:10:49 crc kubenswrapper[4766]: I0129 12:10:49.267586 4766 scope.go:117] "RemoveContainer" containerID="b3250b5d1f8f5957a2e309b71040f8ccc166b0f98557611209e8887cc3ed330a" Jan 29 12:10:49 crc kubenswrapper[4766]: E0129 12:10:49.268119 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3250b5d1f8f5957a2e309b71040f8ccc166b0f98557611209e8887cc3ed330a\": container with ID starting with b3250b5d1f8f5957a2e309b71040f8ccc166b0f98557611209e8887cc3ed330a not found: ID does not exist" containerID="b3250b5d1f8f5957a2e309b71040f8ccc166b0f98557611209e8887cc3ed330a" Jan 29 12:10:49 crc kubenswrapper[4766]: I0129 12:10:49.268155 4766 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3250b5d1f8f5957a2e309b71040f8ccc166b0f98557611209e8887cc3ed330a"} err="failed to get container status \"b3250b5d1f8f5957a2e309b71040f8ccc166b0f98557611209e8887cc3ed330a\": rpc error: code = NotFound desc = could not find container \"b3250b5d1f8f5957a2e309b71040f8ccc166b0f98557611209e8887cc3ed330a\": container with ID starting with b3250b5d1f8f5957a2e309b71040f8ccc166b0f98557611209e8887cc3ed330a not found: ID does not exist" Jan 29 12:10:49 crc kubenswrapper[4766]: I0129 12:10:49.268180 4766 scope.go:117] "RemoveContainer" containerID="ba95b99f0b3be4ad92023a817b9f53ac4cf87a40116c2e359e4f044d1550fb90" Jan 29 12:10:49 crc kubenswrapper[4766]: E0129 12:10:49.268471 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba95b99f0b3be4ad92023a817b9f53ac4cf87a40116c2e359e4f044d1550fb90\": container with ID starting with ba95b99f0b3be4ad92023a817b9f53ac4cf87a40116c2e359e4f044d1550fb90 not found: ID does not exist" containerID="ba95b99f0b3be4ad92023a817b9f53ac4cf87a40116c2e359e4f044d1550fb90" Jan 29 12:10:49 crc kubenswrapper[4766]: I0129 12:10:49.268502 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba95b99f0b3be4ad92023a817b9f53ac4cf87a40116c2e359e4f044d1550fb90"} err="failed to get container status \"ba95b99f0b3be4ad92023a817b9f53ac4cf87a40116c2e359e4f044d1550fb90\": rpc error: code = NotFound desc = could not find container \"ba95b99f0b3be4ad92023a817b9f53ac4cf87a40116c2e359e4f044d1550fb90\": container with ID starting with ba95b99f0b3be4ad92023a817b9f53ac4cf87a40116c2e359e4f044d1550fb90 not found: ID does not exist" Jan 29 12:10:51 crc kubenswrapper[4766]: I0129 12:10:51.233071 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c421068d-238b-4b02-99ef-440800040ff1" path="/var/lib/kubelet/pods/c421068d-238b-4b02-99ef-440800040ff1/volumes" Jan 29 12:11:16 crc kubenswrapper[4766]: I0129 12:11:16.361931 4766 patch_prober.go:28] interesting pod/machine-config-daemon-npgg8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:11:16 crc kubenswrapper[4766]: I0129 12:11:16.362677 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:11:16 crc kubenswrapper[4766]: I0129 12:11:16.362733 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" Jan 29 12:11:16 crc kubenswrapper[4766]: I0129 12:11:16.363381 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9"} pod="openshift-machine-config-operator/machine-config-daemon-npgg8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:11:16 crc kubenswrapper[4766]: I0129 12:11:16.363461 4766 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerName="machine-config-daemon" containerID="cri-o://ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9" gracePeriod=600 Jan 29 12:11:17 crc kubenswrapper[4766]: E0129 12:11:17.044769 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:11:17 crc kubenswrapper[4766]: I0129 12:11:17.379775 4766 generic.go:334] "Generic (PLEG): container finished" podID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" containerID="ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9" exitCode=0 Jan 29 12:11:17 crc kubenswrapper[4766]: I0129 12:11:17.379882 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" event={"ID":"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a","Type":"ContainerDied","Data":"ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9"} Jan 29 12:11:17 crc kubenswrapper[4766]: I0129 12:11:17.379952 4766 scope.go:117] "RemoveContainer" containerID="6384f324226c58cf49617538784c9b07815b46089ee6a92aa42286684fbc6cae" Jan 29 12:11:17 crc kubenswrapper[4766]: I0129 12:11:17.380659 4766 scope.go:117] "RemoveContainer" containerID="ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9" Jan 29 12:11:17 crc kubenswrapper[4766]: E0129 12:11:17.380934 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:11:30 crc kubenswrapper[4766]: I0129 12:11:30.225509 4766 scope.go:117] "RemoveContainer" containerID="ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9" Jan 29 12:11:30 crc kubenswrapper[4766]: E0129 12:11:30.226315 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:11:44 crc kubenswrapper[4766]: I0129 12:11:44.224278 4766 scope.go:117] "RemoveContainer" containerID="ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9" Jan 29 12:11:44 crc kubenswrapper[4766]: E0129 12:11:44.225141 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:11:49 crc 
kubenswrapper[4766]: I0129 12:11:49.045737 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-svqzc/must-gather-cm5fs"] Jan 29 12:11:49 crc kubenswrapper[4766]: E0129 12:11:49.047583 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c421068d-238b-4b02-99ef-440800040ff1" containerName="extract-utilities" Jan 29 12:11:49 crc kubenswrapper[4766]: I0129 12:11:49.047694 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c421068d-238b-4b02-99ef-440800040ff1" containerName="extract-utilities" Jan 29 12:11:49 crc kubenswrapper[4766]: E0129 12:11:49.047775 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c421068d-238b-4b02-99ef-440800040ff1" containerName="registry-server" Jan 29 12:11:49 crc kubenswrapper[4766]: I0129 12:11:49.047836 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c421068d-238b-4b02-99ef-440800040ff1" containerName="registry-server" Jan 29 12:11:49 crc kubenswrapper[4766]: E0129 12:11:49.047922 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c421068d-238b-4b02-99ef-440800040ff1" containerName="extract-content" Jan 29 12:11:49 crc kubenswrapper[4766]: I0129 12:11:49.047982 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c421068d-238b-4b02-99ef-440800040ff1" containerName="extract-content" Jan 29 12:11:49 crc kubenswrapper[4766]: I0129 12:11:49.048227 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c421068d-238b-4b02-99ef-440800040ff1" containerName="registry-server" Jan 29 12:11:49 crc kubenswrapper[4766]: I0129 12:11:49.049274 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-svqzc/must-gather-cm5fs" Jan 29 12:11:49 crc kubenswrapper[4766]: I0129 12:11:49.051768 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-svqzc"/"default-dockercfg-6sgdq" Jan 29 12:11:49 crc kubenswrapper[4766]: I0129 12:11:49.052090 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-svqzc"/"kube-root-ca.crt" Jan 29 12:11:49 crc kubenswrapper[4766]: I0129 12:11:49.052199 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-svqzc"/"openshift-service-ca.crt" Jan 29 12:11:49 crc kubenswrapper[4766]: I0129 12:11:49.085347 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-svqzc/must-gather-cm5fs"] Jan 29 12:11:49 crc kubenswrapper[4766]: I0129 12:11:49.149705 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx4nq\" (UniqueName: \"kubernetes.io/projected/8fb72ff7-f357-4c9e-bcc8-80566b79f096-kube-api-access-xx4nq\") pod \"must-gather-cm5fs\" (UID: \"8fb72ff7-f357-4c9e-bcc8-80566b79f096\") " pod="openshift-must-gather-svqzc/must-gather-cm5fs" Jan 29 12:11:49 crc kubenswrapper[4766]: I0129 12:11:49.149766 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8fb72ff7-f357-4c9e-bcc8-80566b79f096-must-gather-output\") pod \"must-gather-cm5fs\" (UID: \"8fb72ff7-f357-4c9e-bcc8-80566b79f096\") " pod="openshift-must-gather-svqzc/must-gather-cm5fs" Jan 29 12:11:49 crc kubenswrapper[4766]: I0129 12:11:49.251208 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx4nq\" (UniqueName: 
\"kubernetes.io/projected/8fb72ff7-f357-4c9e-bcc8-80566b79f096-kube-api-access-xx4nq\") pod \"must-gather-cm5fs\" (UID: \"8fb72ff7-f357-4c9e-bcc8-80566b79f096\") " pod="openshift-must-gather-svqzc/must-gather-cm5fs" Jan 29 12:11:49 crc kubenswrapper[4766]: I0129 12:11:49.252151 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8fb72ff7-f357-4c9e-bcc8-80566b79f096-must-gather-output\") pod \"must-gather-cm5fs\" (UID: \"8fb72ff7-f357-4c9e-bcc8-80566b79f096\") " pod="openshift-must-gather-svqzc/must-gather-cm5fs" Jan 29 12:11:49 crc kubenswrapper[4766]: I0129 12:11:49.252815 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8fb72ff7-f357-4c9e-bcc8-80566b79f096-must-gather-output\") pod \"must-gather-cm5fs\" (UID: \"8fb72ff7-f357-4c9e-bcc8-80566b79f096\") " pod="openshift-must-gather-svqzc/must-gather-cm5fs" Jan 29 12:11:49 crc kubenswrapper[4766]: I0129 12:11:49.280499 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx4nq\" (UniqueName: \"kubernetes.io/projected/8fb72ff7-f357-4c9e-bcc8-80566b79f096-kube-api-access-xx4nq\") pod \"must-gather-cm5fs\" (UID: \"8fb72ff7-f357-4c9e-bcc8-80566b79f096\") " pod="openshift-must-gather-svqzc/must-gather-cm5fs" Jan 29 12:11:49 crc kubenswrapper[4766]: I0129 12:11:49.368261 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-svqzc/must-gather-cm5fs" Jan 29 12:11:49 crc kubenswrapper[4766]: I0129 12:11:49.819980 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-svqzc/must-gather-cm5fs"] Jan 29 12:11:49 crc kubenswrapper[4766]: W0129 12:11:49.826656 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fb72ff7_f357_4c9e_bcc8_80566b79f096.slice/crio-204dc393a341d6c361101068871805375a46ccab414e4fb312d2455ee4a2ea34 WatchSource:0}: Error finding container 204dc393a341d6c361101068871805375a46ccab414e4fb312d2455ee4a2ea34: Status 404 returned error can't find the container with id 204dc393a341d6c361101068871805375a46ccab414e4fb312d2455ee4a2ea34 Jan 29 12:11:50 crc kubenswrapper[4766]: I0129 12:11:50.602762 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-svqzc/must-gather-cm5fs" event={"ID":"8fb72ff7-f357-4c9e-bcc8-80566b79f096","Type":"ContainerStarted","Data":"204dc393a341d6c361101068871805375a46ccab414e4fb312d2455ee4a2ea34"} Jan 29 12:11:57 crc kubenswrapper[4766]: I0129 12:11:57.224754 4766 scope.go:117] "RemoveContainer" containerID="ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9" Jan 29 12:11:57 crc kubenswrapper[4766]: E0129 12:11:57.225454 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:11:59 crc kubenswrapper[4766]: I0129 12:11:59.669786 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-svqzc/must-gather-cm5fs" 
event={"ID":"8fb72ff7-f357-4c9e-bcc8-80566b79f096","Type":"ContainerStarted","Data":"38c71cb6f4b84f4d653dce47b64aa617782e97acb55be4cba863516943cb252a"} Jan 29 12:11:59 crc kubenswrapper[4766]: I0129 12:11:59.670350 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-svqzc/must-gather-cm5fs" event={"ID":"8fb72ff7-f357-4c9e-bcc8-80566b79f096","Type":"ContainerStarted","Data":"e680fb618ed3fdddbdfb06d9b0df7a96a449a104275cf143a0b85145d6b1e6f0"} Jan 29 12:11:59 crc kubenswrapper[4766]: I0129 12:11:59.692574 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-svqzc/must-gather-cm5fs" podStartSLOduration=2.058206644 podStartE2EDuration="10.692548383s" podCreationTimestamp="2026-01-29 12:11:49 +0000 UTC" firstStartedPulling="2026-01-29 12:11:49.830246309 +0000 UTC m=+3046.942639320" lastFinishedPulling="2026-01-29 12:11:58.464588048 +0000 UTC m=+3055.576981059" observedRunningTime="2026-01-29 12:11:59.685704327 +0000 UTC m=+3056.798097348" watchObservedRunningTime="2026-01-29 12:11:59.692548383 +0000 UTC m=+3056.804941404" Jan 29 12:12:07 crc kubenswrapper[4766]: I0129 12:12:07.968789 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tkvz9"] Jan 29 12:12:07 crc kubenswrapper[4766]: I0129 12:12:07.970806 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tkvz9" Jan 29 12:12:07 crc kubenswrapper[4766]: I0129 12:12:07.978707 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tkvz9"] Jan 29 12:12:08 crc kubenswrapper[4766]: I0129 12:12:08.020648 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5bm6\" (UniqueName: \"kubernetes.io/projected/1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1-kube-api-access-x5bm6\") pod \"community-operators-tkvz9\" (UID: \"1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1\") " pod="openshift-marketplace/community-operators-tkvz9" Jan 29 12:12:08 crc kubenswrapper[4766]: I0129 12:12:08.020833 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1-utilities\") pod \"community-operators-tkvz9\" (UID: \"1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1\") " pod="openshift-marketplace/community-operators-tkvz9" Jan 29 12:12:08 crc kubenswrapper[4766]: I0129 12:12:08.020883 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1-catalog-content\") pod \"community-operators-tkvz9\" (UID: \"1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1\") " pod="openshift-marketplace/community-operators-tkvz9" Jan 29 12:12:08 crc kubenswrapper[4766]: I0129 12:12:08.121312 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5bm6\" (UniqueName: \"kubernetes.io/projected/1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1-kube-api-access-x5bm6\") pod \"community-operators-tkvz9\" (UID: \"1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1\") " pod="openshift-marketplace/community-operators-tkvz9" Jan 29 12:12:08 crc kubenswrapper[4766]: I0129 12:12:08.121393 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1-utilities\") pod 
\"community-operators-tkvz9\" (UID: \"1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1\") " pod="openshift-marketplace/community-operators-tkvz9" Jan 29 12:12:08 crc kubenswrapper[4766]: I0129 12:12:08.121451 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1-catalog-content\") pod \"community-operators-tkvz9\" (UID: \"1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1\") " pod="openshift-marketplace/community-operators-tkvz9" Jan 29 12:12:08 crc kubenswrapper[4766]: I0129 12:12:08.121964 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1-catalog-content\") pod \"community-operators-tkvz9\" (UID: \"1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1\") " pod="openshift-marketplace/community-operators-tkvz9" Jan 29 12:12:08 crc kubenswrapper[4766]: I0129 12:12:08.122253 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1-utilities\") pod \"community-operators-tkvz9\" (UID: \"1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1\") " pod="openshift-marketplace/community-operators-tkvz9" Jan 29 12:12:08 crc kubenswrapper[4766]: I0129 12:12:08.164643 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5bm6\" (UniqueName: \"kubernetes.io/projected/1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1-kube-api-access-x5bm6\") pod \"community-operators-tkvz9\" (UID: \"1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1\") " pod="openshift-marketplace/community-operators-tkvz9" Jan 29 12:12:08 crc kubenswrapper[4766]: I0129 12:12:08.286928 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tkvz9" Jan 29 12:12:08 crc kubenswrapper[4766]: I0129 12:12:08.798331 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tkvz9"] Jan 29 12:12:09 crc kubenswrapper[4766]: I0129 12:12:09.224464 4766 scope.go:117] "RemoveContainer" containerID="ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9" Jan 29 12:12:09 crc kubenswrapper[4766]: E0129 12:12:09.225017 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:12:09 crc kubenswrapper[4766]: I0129 12:12:09.734294 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkvz9" event={"ID":"1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1","Type":"ContainerStarted","Data":"c342f6b3a008d1c37bc64a2af634cda908cf52cb2e9bd0701dcff7f626518d72"} Jan 29 12:12:09 crc kubenswrapper[4766]: I0129 12:12:09.734367 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkvz9" event={"ID":"1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1","Type":"ContainerStarted","Data":"419654a7d7011c463d2d6b16e0eedf702b47fb9775e7e76253e2e4ad01d603f6"} Jan 29 12:12:10 crc kubenswrapper[4766]: I0129 12:12:10.742394 4766 generic.go:334] "Generic (PLEG): container finished" podID="1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1" containerID="c342f6b3a008d1c37bc64a2af634cda908cf52cb2e9bd0701dcff7f626518d72" exitCode=0 Jan 29 12:12:10 crc kubenswrapper[4766]: I0129 12:12:10.742503 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkvz9" event={"ID":"1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1","Type":"ContainerDied","Data":"c342f6b3a008d1c37bc64a2af634cda908cf52cb2e9bd0701dcff7f626518d72"} Jan 29 12:12:11 crc kubenswrapper[4766]: I0129 12:12:11.751911 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkvz9" event={"ID":"1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1","Type":"ContainerStarted","Data":"91a916c8c365d40d46149c00b9ab3f4c3ebdeefa9ba17c6ece5b2114fdc55253"} Jan 29 12:12:12 crc kubenswrapper[4766]: I0129 12:12:12.761394 4766 generic.go:334] "Generic (PLEG): container finished" podID="1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1" containerID="91a916c8c365d40d46149c00b9ab3f4c3ebdeefa9ba17c6ece5b2114fdc55253" exitCode=0 Jan 29 12:12:12 crc kubenswrapper[4766]: I0129 12:12:12.761456 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkvz9" event={"ID":"1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1","Type":"ContainerDied","Data":"91a916c8c365d40d46149c00b9ab3f4c3ebdeefa9ba17c6ece5b2114fdc55253"} Jan 29 12:12:12 crc kubenswrapper[4766]: I0129 12:12:12.763059 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 12:12:13 crc kubenswrapper[4766]: I0129 12:12:13.770608 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkvz9" event={"ID":"1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1","Type":"ContainerStarted","Data":"671af23ea6f1f696a3c6602f661555fbf70f6547ecaa6cee95a0718c85873c7e"} Jan 29 12:12:13 
crc kubenswrapper[4766]: I0129 12:12:13.795170 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tkvz9" podStartSLOduration=4.332538863 podStartE2EDuration="6.795152025s" podCreationTimestamp="2026-01-29 12:12:07 +0000 UTC" firstStartedPulling="2026-01-29 12:12:10.744280065 +0000 UTC m=+3067.856673076" lastFinishedPulling="2026-01-29 12:12:13.206893227 +0000 UTC m=+3070.319286238" observedRunningTime="2026-01-29 12:12:13.789502663 +0000 UTC m=+3070.901895684" watchObservedRunningTime="2026-01-29 12:12:13.795152025 +0000 UTC m=+3070.907545036" Jan 29 12:12:18 crc kubenswrapper[4766]: I0129 12:12:18.287892 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tkvz9" Jan 29 12:12:18 crc kubenswrapper[4766]: I0129 12:12:18.288357 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tkvz9" Jan 29 12:12:18 crc kubenswrapper[4766]: I0129 12:12:18.336528 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tkvz9" Jan 29 12:12:18 crc kubenswrapper[4766]: I0129 12:12:18.874051 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tkvz9" Jan 29 12:12:18 crc kubenswrapper[4766]: I0129 12:12:18.925159 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tkvz9"] Jan 29 12:12:20 crc kubenswrapper[4766]: I0129 12:12:20.814355 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tkvz9" podUID="1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1" containerName="registry-server" containerID="cri-o://671af23ea6f1f696a3c6602f661555fbf70f6547ecaa6cee95a0718c85873c7e" gracePeriod=2 Jan 29 12:12:21 crc kubenswrapper[4766]: I0129 12:12:21.157824 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tkvz9" Jan 29 12:12:21 crc kubenswrapper[4766]: I0129 12:12:21.207559 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5bm6\" (UniqueName: \"kubernetes.io/projected/1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1-kube-api-access-x5bm6\") pod \"1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1\" (UID: \"1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1\") " Jan 29 12:12:21 crc kubenswrapper[4766]: I0129 12:12:21.207611 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1-utilities\") pod \"1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1\" (UID: \"1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1\") " Jan 29 12:12:21 crc kubenswrapper[4766]: I0129 12:12:21.207738 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1-catalog-content\") pod \"1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1\" (UID: \"1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1\") " Jan 29 12:12:21 crc kubenswrapper[4766]: I0129 12:12:21.208459 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1-utilities" (OuterVolumeSpecName: "utilities") pod "1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1" (UID: "1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:12:21 crc kubenswrapper[4766]: I0129 12:12:21.212737 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1-kube-api-access-x5bm6" (OuterVolumeSpecName: "kube-api-access-x5bm6") pod "1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1" (UID: "1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1"). InnerVolumeSpecName "kube-api-access-x5bm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:12:21 crc kubenswrapper[4766]: I0129 12:12:21.273794 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1" (UID: "1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:12:21 crc kubenswrapper[4766]: I0129 12:12:21.309912 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:12:21 crc kubenswrapper[4766]: I0129 12:12:21.309961 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5bm6\" (UniqueName: \"kubernetes.io/projected/1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1-kube-api-access-x5bm6\") on node \"crc\" DevicePath \"\"" Jan 29 12:12:21 crc kubenswrapper[4766]: I0129 12:12:21.309973 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:12:21 crc kubenswrapper[4766]: I0129 12:12:21.824780 4766 generic.go:334] "Generic (PLEG): container finished" podID="1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1" containerID="671af23ea6f1f696a3c6602f661555fbf70f6547ecaa6cee95a0718c85873c7e" exitCode=0 Jan 29 12:12:21 crc kubenswrapper[4766]: I0129 12:12:21.824836 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkvz9" event={"ID":"1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1","Type":"ContainerDied","Data":"671af23ea6f1f696a3c6602f661555fbf70f6547ecaa6cee95a0718c85873c7e"} Jan 29 12:12:21 crc kubenswrapper[4766]: I0129 12:12:21.824881 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkvz9" event={"ID":"1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1","Type":"ContainerDied","Data":"419654a7d7011c463d2d6b16e0eedf702b47fb9775e7e76253e2e4ad01d603f6"} Jan 29 12:12:21 crc kubenswrapper[4766]: I0129 12:12:21.824905 4766 scope.go:117] "RemoveContainer" containerID="671af23ea6f1f696a3c6602f661555fbf70f6547ecaa6cee95a0718c85873c7e" Jan 29 12:12:21 crc kubenswrapper[4766]: I0129 12:12:21.825933 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tkvz9" Jan 29 12:12:21 crc kubenswrapper[4766]: I0129 12:12:21.846864 4766 scope.go:117] "RemoveContainer" containerID="91a916c8c365d40d46149c00b9ab3f4c3ebdeefa9ba17c6ece5b2114fdc55253" Jan 29 12:12:21 crc kubenswrapper[4766]: I0129 12:12:21.871758 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tkvz9"] Jan 29 12:12:21 crc kubenswrapper[4766]: I0129 12:12:21.878155 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tkvz9"] Jan 29 12:12:21 crc kubenswrapper[4766]: I0129 12:12:21.884358 4766 scope.go:117] "RemoveContainer" containerID="c342f6b3a008d1c37bc64a2af634cda908cf52cb2e9bd0701dcff7f626518d72" Jan 29 12:12:21 crc kubenswrapper[4766]: I0129 12:12:21.905500 4766 scope.go:117] "RemoveContainer" containerID="671af23ea6f1f696a3c6602f661555fbf70f6547ecaa6cee95a0718c85873c7e" Jan 29 12:12:21 crc kubenswrapper[4766]: E0129 12:12:21.906041 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"671af23ea6f1f696a3c6602f661555fbf70f6547ecaa6cee95a0718c85873c7e\": container with ID starting with 671af23ea6f1f696a3c6602f661555fbf70f6547ecaa6cee95a0718c85873c7e not found: ID does not exist" containerID="671af23ea6f1f696a3c6602f661555fbf70f6547ecaa6cee95a0718c85873c7e" Jan 29 12:12:21 crc kubenswrapper[4766]: I0129 12:12:21.906069 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"671af23ea6f1f696a3c6602f661555fbf70f6547ecaa6cee95a0718c85873c7e"} err="failed to get container status \"671af23ea6f1f696a3c6602f661555fbf70f6547ecaa6cee95a0718c85873c7e\": rpc error: code = NotFound desc = could not find container \"671af23ea6f1f696a3c6602f661555fbf70f6547ecaa6cee95a0718c85873c7e\": container with ID starting with 671af23ea6f1f696a3c6602f661555fbf70f6547ecaa6cee95a0718c85873c7e not found: ID does not exist" Jan 29 12:12:21 crc kubenswrapper[4766]: I0129 12:12:21.906094 4766 scope.go:117] "RemoveContainer" containerID="91a916c8c365d40d46149c00b9ab3f4c3ebdeefa9ba17c6ece5b2114fdc55253" Jan 29 12:12:21 crc kubenswrapper[4766]: E0129 12:12:21.906393 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91a916c8c365d40d46149c00b9ab3f4c3ebdeefa9ba17c6ece5b2114fdc55253\": container with ID starting with 91a916c8c365d40d46149c00b9ab3f4c3ebdeefa9ba17c6ece5b2114fdc55253 not found: ID does not exist" containerID="91a916c8c365d40d46149c00b9ab3f4c3ebdeefa9ba17c6ece5b2114fdc55253" Jan 29 12:12:21 crc kubenswrapper[4766]: I0129 12:12:21.906510 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91a916c8c365d40d46149c00b9ab3f4c3ebdeefa9ba17c6ece5b2114fdc55253"} err="failed to get container status \"91a916c8c365d40d46149c00b9ab3f4c3ebdeefa9ba17c6ece5b2114fdc55253\": rpc error: code = NotFound desc = could not find container \"91a916c8c365d40d46149c00b9ab3f4c3ebdeefa9ba17c6ece5b2114fdc55253\": container with ID starting with 91a916c8c365d40d46149c00b9ab3f4c3ebdeefa9ba17c6ece5b2114fdc55253 not found: ID does not exist" Jan 29 12:12:21 crc kubenswrapper[4766]: I0129 12:12:21.906603 4766 scope.go:117] "RemoveContainer" containerID="c342f6b3a008d1c37bc64a2af634cda908cf52cb2e9bd0701dcff7f626518d72" Jan 29 12:12:21 crc kubenswrapper[4766]: E0129 12:12:21.906955 4766 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c342f6b3a008d1c37bc64a2af634cda908cf52cb2e9bd0701dcff7f626518d72\": container with ID starting with c342f6b3a008d1c37bc64a2af634cda908cf52cb2e9bd0701dcff7f626518d72 not found: ID does not exist" containerID="c342f6b3a008d1c37bc64a2af634cda908cf52cb2e9bd0701dcff7f626518d72" Jan 29 12:12:21 crc kubenswrapper[4766]: I0129 12:12:21.906977 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c342f6b3a008d1c37bc64a2af634cda908cf52cb2e9bd0701dcff7f626518d72"} err="failed to get container status \"c342f6b3a008d1c37bc64a2af634cda908cf52cb2e9bd0701dcff7f626518d72\": rpc error: code = NotFound desc = could not find container \"c342f6b3a008d1c37bc64a2af634cda908cf52cb2e9bd0701dcff7f626518d72\": container with ID starting with c342f6b3a008d1c37bc64a2af634cda908cf52cb2e9bd0701dcff7f626518d72 not found: ID does not exist" Jan 29 12:12:22 crc kubenswrapper[4766]: I0129 12:12:22.224644 4766 scope.go:117] "RemoveContainer" containerID="ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9" Jan 29 12:12:22 crc kubenswrapper[4766]: E0129 12:12:22.224960 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:12:23 crc kubenswrapper[4766]: I0129 12:12:23.234859 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1" path="/var/lib/kubelet/pods/1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1/volumes" Jan 29 12:12:35 crc kubenswrapper[4766]: I0129 12:12:35.228806 4766 scope.go:117] "RemoveContainer" containerID="ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9" Jan 29 12:12:35 crc kubenswrapper[4766]: E0129 12:12:35.229715 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:12:47 crc kubenswrapper[4766]: I0129 12:12:47.226245 4766 scope.go:117] "RemoveContainer" containerID="ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9" Jan 29 12:12:47 crc kubenswrapper[4766]: E0129 12:12:47.227880 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:12:59 crc kubenswrapper[4766]: I0129 12:12:59.233618 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56_cb2e1ea2-2471-4e0d-93ac-36ac457e1d59/util/0.log" Jan 29 12:12:59 crc kubenswrapper[4766]: I0129 12:12:59.470039 4766 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56_cb2e1ea2-2471-4e0d-93ac-36ac457e1d59/util/0.log" Jan 29 12:12:59 crc kubenswrapper[4766]: I0129 12:12:59.475777 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56_cb2e1ea2-2471-4e0d-93ac-36ac457e1d59/pull/0.log" Jan 29 12:12:59 crc kubenswrapper[4766]: I0129 12:12:59.530242 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56_cb2e1ea2-2471-4e0d-93ac-36ac457e1d59/pull/0.log" Jan 29 12:12:59 crc kubenswrapper[4766]: I0129 12:12:59.711322 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56_cb2e1ea2-2471-4e0d-93ac-36ac457e1d59/pull/0.log" Jan 29 12:12:59 crc kubenswrapper[4766]: I0129 12:12:59.714802 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56_cb2e1ea2-2471-4e0d-93ac-36ac457e1d59/extract/0.log" Jan 29 12:12:59 crc kubenswrapper[4766]: I0129 12:12:59.730895 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0c5ac011c3ca3e9d4d53ea2b6adcaf934f57b4215700b960339a071705st56_cb2e1ea2-2471-4e0d-93ac-36ac457e1d59/util/0.log" Jan 29 12:12:59 crc kubenswrapper[4766]: I0129 12:12:59.954341 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-hn2zr_0f72b13e-c9ce-4ada-ace2-432d17b8784e/manager/0.log" Jan 29 12:12:59 crc kubenswrapper[4766]: I0129 12:12:59.998392 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-zqq8r_7d553af8-9c25-432a-bb68-5402fbd6221e/manager/0.log" Jan 29 12:13:00 crc kubenswrapper[4766]: I0129 12:13:00.162842 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-4j7sq_19efe92b-6dae-4b62-920b-0348877b5217/manager/0.log" Jan 29 12:13:00 crc kubenswrapper[4766]: I0129 12:13:00.262859 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-rgj67_ab5afb01-eba4-4480-a437-4d2e0cdb16bb/manager/0.log" Jan 29 12:13:00 crc kubenswrapper[4766]: I0129 12:13:00.391316 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-zw2qx_767a94c6-6767-4dc9-9054-70945f39e248/manager/0.log" Jan 29 12:13:00 crc kubenswrapper[4766]: I0129 12:13:00.443391 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-gwttp_8e6fc747-e7e2-438d-a00e-3ab94b806035/manager/0.log" Jan 29 12:13:00 crc kubenswrapper[4766]: I0129 12:13:00.688274 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-c5gtf_c2f7549b-08ae-4ec0-96ac-25997e35d30e/manager/0.log" Jan 29 12:13:00 crc kubenswrapper[4766]: I0129 12:13:00.817191 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-jhhql_fa222cda-3f2c-49fb-9d14-466dce8c9c40/manager/0.log" Jan 29 12:13:00 crc kubenswrapper[4766]: I0129 
12:13:00.963206 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-cb57l_c98f5447-fb23-4b08-b5a8-70bce28d9bb7/manager/0.log" Jan 29 12:13:01 crc kubenswrapper[4766]: I0129 12:13:01.019602 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-kbqkb_a9f5e2bf-dd4c-405f-9a1c-439c3abea9f6/manager/0.log" Jan 29 12:13:01 crc kubenswrapper[4766]: I0129 12:13:01.190696 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-gpc9d_d93f68fe-b726-4e2d-afa4-9b789a96dc55/manager/0.log" Jan 29 12:13:01 crc kubenswrapper[4766]: I0129 12:13:01.273470 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-g5nzh_a3ac13ec-bcf6-40f8-be96-d4302334f324/manager/0.log" Jan 29 12:13:01 crc kubenswrapper[4766]: I0129 12:13:01.508206 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-lc5rh_16096f77-0fe2-498f-8b86-480d699b9fd6/manager/0.log" Jan 29 12:13:01 crc kubenswrapper[4766]: I0129 12:13:01.511976 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-nn778_38e05981-7669-4ee4-af1a-8ba826587cda/manager/0.log" Jan 29 12:13:01 crc kubenswrapper[4766]: I0129 12:13:01.658152 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dkprz2_8b1a55ff-1c16-4f2d-a92a-f00adeff5423/manager/0.log" Jan 29 12:13:01 crc kubenswrapper[4766]: I0129 12:13:01.846844 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-f45dc54dc-g5pbs_e5e2bd8b-a38a-406c-b237-ff9e369d107c/operator/0.log" Jan 29 12:13:02 crc kubenswrapper[4766]: I0129 12:13:02.079056 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-zg86v_8fe94dcf-0b49-4bba-b077-aff75fd5ae19/registry-server/0.log" Jan 29 12:13:02 crc kubenswrapper[4766]: I0129 12:13:02.227063 4766 scope.go:117] "RemoveContainer" containerID="ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9" Jan 29 12:13:02 crc kubenswrapper[4766]: E0129 12:13:02.231613 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:13:02 crc kubenswrapper[4766]: I0129 12:13:02.325539 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-bcpmv_dbcef236-480a-41a2-8462-4695dc762ed1/manager/0.log" Jan 29 12:13:02 crc kubenswrapper[4766]: I0129 12:13:02.397335 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-gth8s_eed499c2-b0c1-4fbb-b0e6-543d9f1ac230/manager/0.log" Jan 29 12:13:02 crc kubenswrapper[4766]: I0129 12:13:02.593781 4766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2xvf8_df93cd48-3695-40d8-a9e5-7321f57034ed/operator/0.log" Jan 29 12:13:02 crc kubenswrapper[4766]: I0129 12:13:02.649519 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-66cc5c7d8c-wrglp_bd3e0529-a99d-4174-a9b1-7a937bf09579/manager/0.log" Jan 29 12:13:02 crc kubenswrapper[4766]: I0129 12:13:02.832953 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-8xvmt_215d318f-0aac-4fa1-9d80-c162d3922e62/manager/0.log" Jan 29 12:13:02 crc kubenswrapper[4766]: I0129 12:13:02.943816 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-w7n2r_1d209024-1c33-4b4e-af1c-71c6039e69c9/manager/0.log" Jan 29 12:13:03 crc kubenswrapper[4766]: I0129 12:13:03.027086 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-6snl2_cacfa742-e2bf-48f3-8da2-3a6f7d66f60e/manager/0.log" Jan 29 12:13:03 crc kubenswrapper[4766]: I0129 12:13:03.136139 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-zgjhb_248cfd14-922e-4123-9a39-849d292613f0/manager/0.log" Jan 29 12:13:13 crc kubenswrapper[4766]: I0129 12:13:13.224217 4766 scope.go:117] "RemoveContainer" containerID="ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9" Jan 29 12:13:13 crc kubenswrapper[4766]: E0129 12:13:13.224966 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:13:22 crc kubenswrapper[4766]: I0129 12:13:22.421112 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-kgqmk_b6b1d6a6-3e31-4fcf-88e2-d73f910a77ef/control-plane-machine-set-operator/0.log" Jan 29 12:13:22 crc kubenswrapper[4766]: I0129 12:13:22.602548 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-q65jj_22f4cece-ea69-4c25-b492-8d03d960353e/kube-rbac-proxy/0.log" Jan 29 12:13:22 crc kubenswrapper[4766]: I0129 12:13:22.645013 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-q65jj_22f4cece-ea69-4c25-b492-8d03d960353e/machine-api-operator/0.log" Jan 29 12:13:28 crc kubenswrapper[4766]: I0129 12:13:28.225508 4766 scope.go:117] "RemoveContainer" containerID="ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9" Jan 29 12:13:28 crc kubenswrapper[4766]: E0129 12:13:28.226909 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:13:35 crc 
kubenswrapper[4766]: I0129 12:13:35.446098 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-fktjr_02af44f9-cf78-4a95-ac39-e6012cb5446a/cert-manager-controller/0.log" Jan 29 12:13:35 crc kubenswrapper[4766]: I0129 12:13:35.633753 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-6fb75_1ec76af5-13ae-4c4e-8242-66f1df583a46/cert-manager-cainjector/0.log" Jan 29 12:13:35 crc kubenswrapper[4766]: I0129 12:13:35.698122 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-hgrv9_7b86c740-3495-4d4e-9205-2940c91abcb2/cert-manager-webhook/0.log" Jan 29 12:13:43 crc kubenswrapper[4766]: I0129 12:13:43.224631 4766 scope.go:117] "RemoveContainer" containerID="ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9" Jan 29 12:13:43 crc kubenswrapper[4766]: E0129 12:13:43.226432 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:13:48 crc kubenswrapper[4766]: I0129 12:13:48.515572 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-v72pq_603cfc4d-f620-41af-98bc-06d98fcaa229/nmstate-console-plugin/0.log" Jan 29 12:13:48 crc kubenswrapper[4766]: I0129 12:13:48.634909 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-mnjxp_8bf08f13-11cf-4a07-b66f-f36591ae076e/nmstate-handler/0.log" Jan 29 12:13:48 crc kubenswrapper[4766]: I0129 12:13:48.676906 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-48dfn_49980fb9-2330-4be5-9d44-e308d3f2d79b/kube-rbac-proxy/0.log" Jan 29 12:13:48 crc kubenswrapper[4766]: I0129 12:13:48.722533 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-48dfn_49980fb9-2330-4be5-9d44-e308d3f2d79b/nmstate-metrics/0.log" Jan 29 12:13:48 crc kubenswrapper[4766]: I0129 12:13:48.869424 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-2wgfm_eb648e32-b2f9-44e3-8a32-fd27af7c41cc/nmstate-operator/0.log" Jan 29 12:13:48 crc kubenswrapper[4766]: I0129 12:13:48.910684 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-6gndn_f301b6de-8128-43bf-b3cd-92e1ad13b932/nmstate-webhook/0.log" Jan 29 12:13:54 crc kubenswrapper[4766]: I0129 12:13:54.224987 4766 scope.go:117] "RemoveContainer" containerID="ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9" Jan 29 12:13:54 crc kubenswrapper[4766]: E0129 12:13:54.225724 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:14:07 crc kubenswrapper[4766]: I0129 12:14:07.225039 4766 
scope.go:117] "RemoveContainer" containerID="ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9" Jan 29 12:14:07 crc kubenswrapper[4766]: E0129 12:14:07.225812 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:14:14 crc kubenswrapper[4766]: I0129 12:14:14.611252 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-n8gpm_05465246-85ae-41ab-8696-d92c3e8f1231/kube-rbac-proxy/0.log" Jan 29 12:14:14 crc kubenswrapper[4766]: I0129 12:14:14.902067 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mnrnx_f683d2c1-09f9-488a-9361-44f876f7a61a/cp-frr-files/0.log" Jan 29 12:14:14 crc kubenswrapper[4766]: I0129 12:14:14.941109 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-n8gpm_05465246-85ae-41ab-8696-d92c3e8f1231/controller/0.log" Jan 29 12:14:15 crc kubenswrapper[4766]: I0129 12:14:15.055202 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mnrnx_f683d2c1-09f9-488a-9361-44f876f7a61a/cp-frr-files/0.log" Jan 29 12:14:15 crc kubenswrapper[4766]: I0129 12:14:15.058714 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mnrnx_f683d2c1-09f9-488a-9361-44f876f7a61a/cp-reloader/0.log" Jan 29 12:14:15 crc kubenswrapper[4766]: I0129 12:14:15.094530 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mnrnx_f683d2c1-09f9-488a-9361-44f876f7a61a/cp-metrics/0.log" Jan 29 12:14:15 crc kubenswrapper[4766]: I0129 12:14:15.137044 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mnrnx_f683d2c1-09f9-488a-9361-44f876f7a61a/cp-reloader/0.log" Jan 29 12:14:15 crc kubenswrapper[4766]: I0129 12:14:15.359209 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mnrnx_f683d2c1-09f9-488a-9361-44f876f7a61a/cp-metrics/0.log" Jan 29 12:14:15 crc kubenswrapper[4766]: I0129 12:14:15.365259 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mnrnx_f683d2c1-09f9-488a-9361-44f876f7a61a/cp-frr-files/0.log" Jan 29 12:14:15 crc kubenswrapper[4766]: I0129 12:14:15.372364 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mnrnx_f683d2c1-09f9-488a-9361-44f876f7a61a/cp-reloader/0.log" Jan 29 12:14:15 crc kubenswrapper[4766]: I0129 12:14:15.408515 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mnrnx_f683d2c1-09f9-488a-9361-44f876f7a61a/cp-metrics/0.log" Jan 29 12:14:15 crc kubenswrapper[4766]: I0129 12:14:15.561250 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mnrnx_f683d2c1-09f9-488a-9361-44f876f7a61a/cp-reloader/0.log" Jan 29 12:14:15 crc kubenswrapper[4766]: I0129 12:14:15.561310 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mnrnx_f683d2c1-09f9-488a-9361-44f876f7a61a/cp-frr-files/0.log" Jan 29 12:14:15 crc kubenswrapper[4766]: I0129 12:14:15.564243 4766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-mnrnx_f683d2c1-09f9-488a-9361-44f876f7a61a/cp-metrics/0.log" Jan 29 12:14:15 crc kubenswrapper[4766]: I0129 12:14:15.585378 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mnrnx_f683d2c1-09f9-488a-9361-44f876f7a61a/controller/0.log" Jan 29 12:14:15 crc kubenswrapper[4766]: I0129 12:14:15.730399 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mnrnx_f683d2c1-09f9-488a-9361-44f876f7a61a/frr-metrics/0.log" Jan 29 12:14:15 crc kubenswrapper[4766]: I0129 12:14:15.757698 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mnrnx_f683d2c1-09f9-488a-9361-44f876f7a61a/kube-rbac-proxy/0.log" Jan 29 12:14:15 crc kubenswrapper[4766]: I0129 12:14:15.804164 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mnrnx_f683d2c1-09f9-488a-9361-44f876f7a61a/kube-rbac-proxy-frr/0.log" Jan 29 12:14:15 crc kubenswrapper[4766]: I0129 12:14:15.915272 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mnrnx_f683d2c1-09f9-488a-9361-44f876f7a61a/reloader/0.log" Jan 29 12:14:16 crc kubenswrapper[4766]: I0129 12:14:16.011604 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-lfzb5_865e99ee-6f4f-47b1-bd58-86910c5f3b83/frr-k8s-webhook-server/0.log" Jan 29 12:14:16 crc kubenswrapper[4766]: I0129 12:14:16.266481 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7776d7d99d-7t5gz_18955c8a-3096-4daa-8173-5d90205581b7/manager/0.log" Jan 29 12:14:16 crc kubenswrapper[4766]: I0129 12:14:16.340890 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-78b44f4d5f-h2wr6_41ed541e-e7d7-4dfb-bfeb-5b7492fa1a0b/webhook-server/0.log" Jan 29 12:14:16 crc kubenswrapper[4766]: I0129 12:14:16.495953 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6rfrd_28dff331-a770-4c20-b111-608aad657cf7/kube-rbac-proxy/0.log" Jan 29 12:14:16 crc kubenswrapper[4766]: I0129 12:14:16.978699 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mnrnx_f683d2c1-09f9-488a-9361-44f876f7a61a/frr/0.log" Jan 29 12:14:17 crc kubenswrapper[4766]: I0129 12:14:17.029201 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6rfrd_28dff331-a770-4c20-b111-608aad657cf7/speaker/0.log" Jan 29 12:14:20 crc kubenswrapper[4766]: I0129 12:14:20.225028 4766 scope.go:117] "RemoveContainer" containerID="ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9" Jan 29 12:14:20 crc kubenswrapper[4766]: E0129 12:14:20.225935 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:14:29 crc kubenswrapper[4766]: I0129 12:14:29.219557 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz_a5d54a74-7c01-406c-9b46-c2dd7df8fb9e/util/0.log" Jan 29 12:14:29 crc kubenswrapper[4766]: I0129 12:14:29.331328 4766 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz_a5d54a74-7c01-406c-9b46-c2dd7df8fb9e/pull/0.log" Jan 29 12:14:29 crc kubenswrapper[4766]: I0129 12:14:29.389452 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz_a5d54a74-7c01-406c-9b46-c2dd7df8fb9e/util/0.log" Jan 29 12:14:29 crc kubenswrapper[4766]: I0129 12:14:29.488635 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz_a5d54a74-7c01-406c-9b46-c2dd7df8fb9e/pull/0.log" Jan 29 12:14:29 crc kubenswrapper[4766]: I0129 12:14:29.676217 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz_a5d54a74-7c01-406c-9b46-c2dd7df8fb9e/util/0.log" Jan 29 12:14:29 crc kubenswrapper[4766]: I0129 12:14:29.682435 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz_a5d54a74-7c01-406c-9b46-c2dd7df8fb9e/extract/0.log" Jan 29 12:14:29 crc kubenswrapper[4766]: I0129 12:14:29.694115 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4xvdz_a5d54a74-7c01-406c-9b46-c2dd7df8fb9e/pull/0.log" Jan 29 12:14:29 crc kubenswrapper[4766]: I0129 12:14:29.885057 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6_11a99c06-5b9b-475a-b0e8-528d1e8a9eb6/util/0.log" Jan 29 12:14:30 crc kubenswrapper[4766]: I0129 12:14:30.094364 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6_11a99c06-5b9b-475a-b0e8-528d1e8a9eb6/pull/0.log" Jan 29 12:14:30 crc kubenswrapper[4766]: I0129 12:14:30.102500 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6_11a99c06-5b9b-475a-b0e8-528d1e8a9eb6/pull/0.log" Jan 29 12:14:30 crc kubenswrapper[4766]: I0129 12:14:30.104989 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6_11a99c06-5b9b-475a-b0e8-528d1e8a9eb6/util/0.log" Jan 29 12:14:30 crc kubenswrapper[4766]: I0129 12:14:30.284183 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6_11a99c06-5b9b-475a-b0e8-528d1e8a9eb6/util/0.log" Jan 29 12:14:30 crc kubenswrapper[4766]: I0129 12:14:30.309172 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6_11a99c06-5b9b-475a-b0e8-528d1e8a9eb6/extract/0.log" Jan 29 12:14:30 crc kubenswrapper[4766]: I0129 12:14:30.455155 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zpww6_11a99c06-5b9b-475a-b0e8-528d1e8a9eb6/pull/0.log" Jan 29 12:14:30 crc kubenswrapper[4766]: I0129 12:14:30.492651 4766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b_498cad84-d6b7-4732-bdee-39dec01c2829/util/0.log" Jan 29 12:14:30 crc kubenswrapper[4766]: I0129 12:14:30.721073 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b_498cad84-d6b7-4732-bdee-39dec01c2829/util/0.log" Jan 29 12:14:30 crc kubenswrapper[4766]: I0129 12:14:30.726809 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b_498cad84-d6b7-4732-bdee-39dec01c2829/pull/0.log" Jan 29 12:14:30 crc kubenswrapper[4766]: I0129 12:14:30.729072 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b_498cad84-d6b7-4732-bdee-39dec01c2829/pull/0.log" Jan 29 12:14:30 crc kubenswrapper[4766]: I0129 12:14:30.852850 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b_498cad84-d6b7-4732-bdee-39dec01c2829/util/0.log" Jan 29 12:14:30 crc kubenswrapper[4766]: I0129 12:14:30.918534 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b_498cad84-d6b7-4732-bdee-39dec01c2829/pull/0.log" Jan 29 12:14:30 crc kubenswrapper[4766]: I0129 12:14:30.934923 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m2l2b_498cad84-d6b7-4732-bdee-39dec01c2829/extract/0.log" Jan 29 12:14:31 crc kubenswrapper[4766]: I0129 12:14:31.056363 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c6zxp_498c7200-d206-4ace-8627-99ae72a379ce/extract-utilities/0.log" Jan 29 12:14:31 crc kubenswrapper[4766]: I0129 12:14:31.206092 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c6zxp_498c7200-d206-4ace-8627-99ae72a379ce/extract-utilities/0.log" Jan 29 12:14:31 crc kubenswrapper[4766]: I0129 12:14:31.210258 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c6zxp_498c7200-d206-4ace-8627-99ae72a379ce/extract-content/0.log" Jan 29 12:14:31 crc kubenswrapper[4766]: I0129 12:14:31.251829 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c6zxp_498c7200-d206-4ace-8627-99ae72a379ce/extract-content/0.log" Jan 29 12:14:31 crc kubenswrapper[4766]: I0129 12:14:31.446764 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c6zxp_498c7200-d206-4ace-8627-99ae72a379ce/extract-utilities/0.log" Jan 29 12:14:31 crc kubenswrapper[4766]: I0129 12:14:31.453443 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c6zxp_498c7200-d206-4ace-8627-99ae72a379ce/extract-content/0.log" Jan 29 12:14:31 crc kubenswrapper[4766]: I0129 12:14:31.867654 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bf677_c25ea5fb-edce-471f-a010-c07f32090ee8/extract-utilities/0.log" Jan 29 12:14:31 crc kubenswrapper[4766]: I0129 12:14:31.892691 4766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-c6zxp_498c7200-d206-4ace-8627-99ae72a379ce/registry-server/0.log" Jan 29 12:14:32 crc kubenswrapper[4766]: I0129 12:14:32.050303 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bf677_c25ea5fb-edce-471f-a010-c07f32090ee8/extract-content/0.log" Jan 29 12:14:32 crc kubenswrapper[4766]: I0129 12:14:32.068424 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bf677_c25ea5fb-edce-471f-a010-c07f32090ee8/extract-content/0.log" Jan 29 12:14:32 crc kubenswrapper[4766]: I0129 12:14:32.089002 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bf677_c25ea5fb-edce-471f-a010-c07f32090ee8/extract-utilities/0.log" Jan 29 12:14:32 crc kubenswrapper[4766]: I0129 12:14:32.357747 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bf677_c25ea5fb-edce-471f-a010-c07f32090ee8/extract-utilities/0.log" Jan 29 12:14:32 crc kubenswrapper[4766]: I0129 12:14:32.386691 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bf677_c25ea5fb-edce-471f-a010-c07f32090ee8/extract-content/0.log" Jan 29 12:14:32 crc kubenswrapper[4766]: I0129 12:14:32.593342 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-56hgk_eb7df4c5-66c5-4c8e-a19e-b37a0fad40d8/marketplace-operator/0.log" Jan 29 12:14:32 crc kubenswrapper[4766]: I0129 12:14:32.741203 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7gq4m_f3a5d5d4-41a2-4d0d-b915-aff5d9200703/extract-utilities/0.log" Jan 29 12:14:32 crc kubenswrapper[4766]: I0129 12:14:32.966264 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7gq4m_f3a5d5d4-41a2-4d0d-b915-aff5d9200703/extract-utilities/0.log" Jan 29 12:14:32 crc kubenswrapper[4766]: I0129 12:14:32.981948 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7gq4m_f3a5d5d4-41a2-4d0d-b915-aff5d9200703/extract-content/0.log" Jan 29 12:14:32 crc kubenswrapper[4766]: I0129 12:14:32.998392 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bf677_c25ea5fb-edce-471f-a010-c07f32090ee8/registry-server/0.log" Jan 29 12:14:33 crc kubenswrapper[4766]: I0129 12:14:33.004358 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7gq4m_f3a5d5d4-41a2-4d0d-b915-aff5d9200703/extract-content/0.log" Jan 29 12:14:33 crc kubenswrapper[4766]: I0129 12:14:33.165302 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7gq4m_f3a5d5d4-41a2-4d0d-b915-aff5d9200703/extract-utilities/0.log" Jan 29 12:14:33 crc kubenswrapper[4766]: I0129 12:14:33.168191 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7gq4m_f3a5d5d4-41a2-4d0d-b915-aff5d9200703/extract-content/0.log" Jan 29 12:14:33 crc kubenswrapper[4766]: I0129 12:14:33.373753 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7gq4m_f3a5d5d4-41a2-4d0d-b915-aff5d9200703/registry-server/0.log" Jan 29 12:14:33 crc kubenswrapper[4766]: I0129 12:14:33.385809 4766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-l554z_0253132b-88f5-4f77-8bd4-5effddcdd170/extract-utilities/0.log" Jan 29 12:14:33 crc kubenswrapper[4766]: I0129 12:14:33.559503 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l554z_0253132b-88f5-4f77-8bd4-5effddcdd170/extract-content/0.log" Jan 29 12:14:33 crc kubenswrapper[4766]: I0129 12:14:33.569749 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l554z_0253132b-88f5-4f77-8bd4-5effddcdd170/extract-utilities/0.log" Jan 29 12:14:33 crc kubenswrapper[4766]: I0129 12:14:33.577672 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l554z_0253132b-88f5-4f77-8bd4-5effddcdd170/extract-content/0.log" Jan 29 12:14:33 crc kubenswrapper[4766]: I0129 12:14:33.738820 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l554z_0253132b-88f5-4f77-8bd4-5effddcdd170/extract-content/0.log" Jan 29 12:14:33 crc kubenswrapper[4766]: I0129 12:14:33.756714 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l554z_0253132b-88f5-4f77-8bd4-5effddcdd170/extract-utilities/0.log" Jan 29 12:14:34 crc kubenswrapper[4766]: I0129 12:14:34.169841 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l554z_0253132b-88f5-4f77-8bd4-5effddcdd170/registry-server/0.log" Jan 29 12:14:34 crc kubenswrapper[4766]: I0129 12:14:34.226064 4766 scope.go:117] "RemoveContainer" containerID="ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9" Jan 29 12:14:34 crc kubenswrapper[4766]: E0129 12:14:34.226434 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:14:45 crc kubenswrapper[4766]: I0129 12:14:45.230065 4766 scope.go:117] "RemoveContainer" containerID="ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9" Jan 29 12:14:45 crc kubenswrapper[4766]: E0129 12:14:45.232715 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:14:57 crc kubenswrapper[4766]: I0129 12:14:57.225951 4766 scope.go:117] "RemoveContainer" containerID="ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9" Jan 29 12:14:57 crc kubenswrapper[4766]: E0129 12:14:57.227754 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:15:00 
crc kubenswrapper[4766]: I0129 12:15:00.143650 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494815-qsgvp"] Jan 29 12:15:00 crc kubenswrapper[4766]: E0129 12:15:00.144291 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1" containerName="extract-content" Jan 29 12:15:00 crc kubenswrapper[4766]: I0129 12:15:00.144305 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1" containerName="extract-content" Jan 29 12:15:00 crc kubenswrapper[4766]: E0129 12:15:00.144327 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1" containerName="extract-utilities" Jan 29 12:15:00 crc kubenswrapper[4766]: I0129 12:15:00.144334 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1" containerName="extract-utilities" Jan 29 12:15:00 crc kubenswrapper[4766]: E0129 12:15:00.144346 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1" containerName="registry-server" Jan 29 12:15:00 crc kubenswrapper[4766]: I0129 12:15:00.144353 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1" containerName="registry-server" Jan 29 12:15:00 crc kubenswrapper[4766]: I0129 12:15:00.144516 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a308bf1-8dbf-4e54-a8f6-a908eb04c9f1" containerName="registry-server" Jan 29 12:15:00 crc kubenswrapper[4766]: I0129 12:15:00.145062 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-qsgvp" Jan 29 12:15:00 crc kubenswrapper[4766]: I0129 12:15:00.147084 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 12:15:00 crc kubenswrapper[4766]: I0129 12:15:00.147511 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 12:15:00 crc kubenswrapper[4766]: I0129 12:15:00.159829 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494815-qsgvp"] Jan 29 12:15:00 crc kubenswrapper[4766]: I0129 12:15:00.235710 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2948\" (UniqueName: \"kubernetes.io/projected/cc5efa96-2bbb-415b-b26f-1e2403df7502-kube-api-access-x2948\") pod \"collect-profiles-29494815-qsgvp\" (UID: \"cc5efa96-2bbb-415b-b26f-1e2403df7502\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-qsgvp" Jan 29 12:15:00 crc kubenswrapper[4766]: I0129 12:15:00.235778 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc5efa96-2bbb-415b-b26f-1e2403df7502-secret-volume\") pod \"collect-profiles-29494815-qsgvp\" (UID: \"cc5efa96-2bbb-415b-b26f-1e2403df7502\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-qsgvp" Jan 29 12:15:00 crc kubenswrapper[4766]: I0129 12:15:00.235806 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc5efa96-2bbb-415b-b26f-1e2403df7502-config-volume\") pod 
\"collect-profiles-29494815-qsgvp\" (UID: \"cc5efa96-2bbb-415b-b26f-1e2403df7502\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-qsgvp" Jan 29 12:15:00 crc kubenswrapper[4766]: I0129 12:15:00.337509 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2948\" (UniqueName: \"kubernetes.io/projected/cc5efa96-2bbb-415b-b26f-1e2403df7502-kube-api-access-x2948\") pod \"collect-profiles-29494815-qsgvp\" (UID: \"cc5efa96-2bbb-415b-b26f-1e2403df7502\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-qsgvp" Jan 29 12:15:00 crc kubenswrapper[4766]: I0129 12:15:00.337594 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc5efa96-2bbb-415b-b26f-1e2403df7502-secret-volume\") pod \"collect-profiles-29494815-qsgvp\" (UID: \"cc5efa96-2bbb-415b-b26f-1e2403df7502\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-qsgvp" Jan 29 12:15:00 crc kubenswrapper[4766]: I0129 12:15:00.337615 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc5efa96-2bbb-415b-b26f-1e2403df7502-config-volume\") pod \"collect-profiles-29494815-qsgvp\" (UID: \"cc5efa96-2bbb-415b-b26f-1e2403df7502\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-qsgvp" Jan 29 12:15:00 crc kubenswrapper[4766]: I0129 12:15:00.338643 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc5efa96-2bbb-415b-b26f-1e2403df7502-config-volume\") pod \"collect-profiles-29494815-qsgvp\" (UID: \"cc5efa96-2bbb-415b-b26f-1e2403df7502\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-qsgvp" Jan 29 12:15:00 crc kubenswrapper[4766]: I0129 12:15:00.352071 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc5efa96-2bbb-415b-b26f-1e2403df7502-secret-volume\") pod \"collect-profiles-29494815-qsgvp\" (UID: \"cc5efa96-2bbb-415b-b26f-1e2403df7502\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-qsgvp" Jan 29 12:15:00 crc kubenswrapper[4766]: I0129 12:15:00.368944 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2948\" (UniqueName: \"kubernetes.io/projected/cc5efa96-2bbb-415b-b26f-1e2403df7502-kube-api-access-x2948\") pod \"collect-profiles-29494815-qsgvp\" (UID: \"cc5efa96-2bbb-415b-b26f-1e2403df7502\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-qsgvp" Jan 29 12:15:00 crc kubenswrapper[4766]: I0129 12:15:00.466285 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-qsgvp" Jan 29 12:15:00 crc kubenswrapper[4766]: I0129 12:15:00.985349 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494815-qsgvp"] Jan 29 12:15:01 crc kubenswrapper[4766]: I0129 12:15:01.104812 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-qsgvp" event={"ID":"cc5efa96-2bbb-415b-b26f-1e2403df7502","Type":"ContainerStarted","Data":"8d85eae61156efca77d61135672c6d17cc99e0f74c875b97e0c0f0e26fd1dccf"} Jan 29 12:15:02 crc kubenswrapper[4766]: I0129 12:15:02.114947 4766 generic.go:334] "Generic (PLEG): container finished" podID="cc5efa96-2bbb-415b-b26f-1e2403df7502" containerID="16e741ed6c04905c0d6bbbfbbdd9b08b9dbca4b3c12d28f6e6c9616d4c675b9a" exitCode=0 Jan 29 12:15:02 crc kubenswrapper[4766]: I0129 12:15:02.115004 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-qsgvp" event={"ID":"cc5efa96-2bbb-415b-b26f-1e2403df7502","Type":"ContainerDied","Data":"16e741ed6c04905c0d6bbbfbbdd9b08b9dbca4b3c12d28f6e6c9616d4c675b9a"} Jan 29 12:15:03 crc kubenswrapper[4766]: I0129 12:15:03.461588 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-qsgvp" Jan 29 12:15:03 crc kubenswrapper[4766]: I0129 12:15:03.589242 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc5efa96-2bbb-415b-b26f-1e2403df7502-secret-volume\") pod \"cc5efa96-2bbb-415b-b26f-1e2403df7502\" (UID: \"cc5efa96-2bbb-415b-b26f-1e2403df7502\") " Jan 29 12:15:03 crc kubenswrapper[4766]: I0129 12:15:03.589296 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2948\" (UniqueName: \"kubernetes.io/projected/cc5efa96-2bbb-415b-b26f-1e2403df7502-kube-api-access-x2948\") pod \"cc5efa96-2bbb-415b-b26f-1e2403df7502\" (UID: \"cc5efa96-2bbb-415b-b26f-1e2403df7502\") " Jan 29 12:15:03 crc kubenswrapper[4766]: I0129 12:15:03.589333 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc5efa96-2bbb-415b-b26f-1e2403df7502-config-volume\") pod \"cc5efa96-2bbb-415b-b26f-1e2403df7502\" (UID: \"cc5efa96-2bbb-415b-b26f-1e2403df7502\") " Jan 29 12:15:03 crc kubenswrapper[4766]: I0129 12:15:03.590512 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc5efa96-2bbb-415b-b26f-1e2403df7502-config-volume" (OuterVolumeSpecName: "config-volume") pod "cc5efa96-2bbb-415b-b26f-1e2403df7502" (UID: "cc5efa96-2bbb-415b-b26f-1e2403df7502"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:15:03 crc kubenswrapper[4766]: I0129 12:15:03.596707 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc5efa96-2bbb-415b-b26f-1e2403df7502-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cc5efa96-2bbb-415b-b26f-1e2403df7502" (UID: "cc5efa96-2bbb-415b-b26f-1e2403df7502"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:15:03 crc kubenswrapper[4766]: I0129 12:15:03.597768 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc5efa96-2bbb-415b-b26f-1e2403df7502-kube-api-access-x2948" (OuterVolumeSpecName: "kube-api-access-x2948") pod "cc5efa96-2bbb-415b-b26f-1e2403df7502" (UID: "cc5efa96-2bbb-415b-b26f-1e2403df7502"). InnerVolumeSpecName "kube-api-access-x2948". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:15:03 crc kubenswrapper[4766]: I0129 12:15:03.691647 4766 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc5efa96-2bbb-415b-b26f-1e2403df7502-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:03 crc kubenswrapper[4766]: I0129 12:15:03.691696 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2948\" (UniqueName: \"kubernetes.io/projected/cc5efa96-2bbb-415b-b26f-1e2403df7502-kube-api-access-x2948\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:03 crc kubenswrapper[4766]: I0129 12:15:03.691709 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc5efa96-2bbb-415b-b26f-1e2403df7502-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:04 crc kubenswrapper[4766]: I0129 12:15:04.134834 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-qsgvp" event={"ID":"cc5efa96-2bbb-415b-b26f-1e2403df7502","Type":"ContainerDied","Data":"8d85eae61156efca77d61135672c6d17cc99e0f74c875b97e0c0f0e26fd1dccf"} Jan 29 12:15:04 crc kubenswrapper[4766]: I0129 12:15:04.135159 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d85eae61156efca77d61135672c6d17cc99e0f74c875b97e0c0f0e26fd1dccf" Jan 29 12:15:04 crc kubenswrapper[4766]: I0129 12:15:04.135201 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-qsgvp" Jan 29 12:15:04 crc kubenswrapper[4766]: I0129 12:15:04.542625 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494770-m9w9z"] Jan 29 12:15:04 crc kubenswrapper[4766]: I0129 12:15:04.549891 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494770-m9w9z"] Jan 29 12:15:05 crc kubenswrapper[4766]: I0129 12:15:05.235535 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="833ad5a8-865a-420a-8337-976684a1c9bd" path="/var/lib/kubelet/pods/833ad5a8-865a-420a-8337-976684a1c9bd/volumes" Jan 29 12:15:09 crc kubenswrapper[4766]: I0129 12:15:09.228228 4766 scope.go:117] "RemoveContainer" containerID="ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9" Jan 29 12:15:09 crc kubenswrapper[4766]: E0129 12:15:09.229166 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:15:12 crc kubenswrapper[4766]: I0129 12:15:12.591087 4766 scope.go:117] "RemoveContainer" containerID="4dd839fac298626b3660bac0cbeaa67e24c1bf48eefaba2f46a958ba0ffff417" Jan 29 12:15:23 crc kubenswrapper[4766]: I0129 12:15:23.225565 4766 scope.go:117] "RemoveContainer" containerID="ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9" Jan 29 12:15:23 crc kubenswrapper[4766]: E0129 12:15:23.226885 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:15:38 crc kubenswrapper[4766]: I0129 12:15:38.226772 4766 scope.go:117] "RemoveContainer" containerID="ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9" Jan 29 12:15:38 crc kubenswrapper[4766]: E0129 12:15:38.227597 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:15:51 crc kubenswrapper[4766]: I0129 12:15:51.227358 4766 scope.go:117] "RemoveContainer" containerID="ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9" Jan 29 12:15:51 crc kubenswrapper[4766]: E0129 12:15:51.228822 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:15:52 crc kubenswrapper[4766]: I0129 12:15:52.517563 4766 generic.go:334] "Generic (PLEG): container finished" podID="8fb72ff7-f357-4c9e-bcc8-80566b79f096" containerID="e680fb618ed3fdddbdfb06d9b0df7a96a449a104275cf143a0b85145d6b1e6f0" exitCode=0 Jan 29 12:15:52 crc kubenswrapper[4766]: I0129 12:15:52.517698 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-svqzc/must-gather-cm5fs" event={"ID":"8fb72ff7-f357-4c9e-bcc8-80566b79f096","Type":"ContainerDied","Data":"e680fb618ed3fdddbdfb06d9b0df7a96a449a104275cf143a0b85145d6b1e6f0"} Jan 29 12:15:52 crc kubenswrapper[4766]: I0129 12:15:52.518387 4766 scope.go:117] "RemoveContainer" containerID="e680fb618ed3fdddbdfb06d9b0df7a96a449a104275cf143a0b85145d6b1e6f0" Jan 29 12:15:53 crc kubenswrapper[4766]: I0129 12:15:53.450131 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-svqzc_must-gather-cm5fs_8fb72ff7-f357-4c9e-bcc8-80566b79f096/gather/0.log" Jan 29 12:16:00 crc kubenswrapper[4766]: I0129 12:16:00.957307 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-svqzc/must-gather-cm5fs"] Jan 29 12:16:00 crc kubenswrapper[4766]: I0129 12:16:00.958148 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-svqzc/must-gather-cm5fs" podUID="8fb72ff7-f357-4c9e-bcc8-80566b79f096" containerName="copy" containerID="cri-o://38c71cb6f4b84f4d653dce47b64aa617782e97acb55be4cba863516943cb252a" gracePeriod=2 Jan 29 12:16:00 crc kubenswrapper[4766]: I0129 12:16:00.964969 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-svqzc/must-gather-cm5fs"] Jan 29 12:16:01 crc kubenswrapper[4766]: I0129 12:16:01.358382 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-svqzc_must-gather-cm5fs_8fb72ff7-f357-4c9e-bcc8-80566b79f096/copy/0.log" Jan 29 12:16:01 crc kubenswrapper[4766]: I0129 12:16:01.359106 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-svqzc/must-gather-cm5fs" Jan 29 12:16:01 crc kubenswrapper[4766]: I0129 12:16:01.468708 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8fb72ff7-f357-4c9e-bcc8-80566b79f096-must-gather-output\") pod \"8fb72ff7-f357-4c9e-bcc8-80566b79f096\" (UID: \"8fb72ff7-f357-4c9e-bcc8-80566b79f096\") " Jan 29 12:16:01 crc kubenswrapper[4766]: I0129 12:16:01.468846 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx4nq\" (UniqueName: \"kubernetes.io/projected/8fb72ff7-f357-4c9e-bcc8-80566b79f096-kube-api-access-xx4nq\") pod \"8fb72ff7-f357-4c9e-bcc8-80566b79f096\" (UID: \"8fb72ff7-f357-4c9e-bcc8-80566b79f096\") " Jan 29 12:16:01 crc kubenswrapper[4766]: I0129 12:16:01.474629 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fb72ff7-f357-4c9e-bcc8-80566b79f096-kube-api-access-xx4nq" (OuterVolumeSpecName: "kube-api-access-xx4nq") pod "8fb72ff7-f357-4c9e-bcc8-80566b79f096" (UID: "8fb72ff7-f357-4c9e-bcc8-80566b79f096"). InnerVolumeSpecName "kube-api-access-xx4nq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:16:01 crc kubenswrapper[4766]: I0129 12:16:01.572614 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx4nq\" (UniqueName: \"kubernetes.io/projected/8fb72ff7-f357-4c9e-bcc8-80566b79f096-kube-api-access-xx4nq\") on node \"crc\" DevicePath \"\"" Jan 29 12:16:01 crc kubenswrapper[4766]: I0129 12:16:01.575190 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fb72ff7-f357-4c9e-bcc8-80566b79f096-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "8fb72ff7-f357-4c9e-bcc8-80566b79f096" (UID: "8fb72ff7-f357-4c9e-bcc8-80566b79f096"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:16:01 crc kubenswrapper[4766]: I0129 12:16:01.588583 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-svqzc_must-gather-cm5fs_8fb72ff7-f357-4c9e-bcc8-80566b79f096/copy/0.log" Jan 29 12:16:01 crc kubenswrapper[4766]: I0129 12:16:01.592457 4766 generic.go:334] "Generic (PLEG): container finished" podID="8fb72ff7-f357-4c9e-bcc8-80566b79f096" containerID="38c71cb6f4b84f4d653dce47b64aa617782e97acb55be4cba863516943cb252a" exitCode=143 Jan 29 12:16:01 crc kubenswrapper[4766]: I0129 12:16:01.592556 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-svqzc/must-gather-cm5fs" Jan 29 12:16:01 crc kubenswrapper[4766]: I0129 12:16:01.592560 4766 scope.go:117] "RemoveContainer" containerID="38c71cb6f4b84f4d653dce47b64aa617782e97acb55be4cba863516943cb252a" Jan 29 12:16:01 crc kubenswrapper[4766]: I0129 12:16:01.618218 4766 scope.go:117] "RemoveContainer" containerID="e680fb618ed3fdddbdfb06d9b0df7a96a449a104275cf143a0b85145d6b1e6f0" Jan 29 12:16:01 crc kubenswrapper[4766]: I0129 12:16:01.674661 4766 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8fb72ff7-f357-4c9e-bcc8-80566b79f096-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 29 12:16:01 crc kubenswrapper[4766]: I0129 12:16:01.701969 4766 scope.go:117] "RemoveContainer" containerID="38c71cb6f4b84f4d653dce47b64aa617782e97acb55be4cba863516943cb252a" Jan 29 12:16:01 crc kubenswrapper[4766]: E0129 12:16:01.702744 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38c71cb6f4b84f4d653dce47b64aa617782e97acb55be4cba863516943cb252a\": container with ID starting with 38c71cb6f4b84f4d653dce47b64aa617782e97acb55be4cba863516943cb252a not found: ID does not exist" containerID="38c71cb6f4b84f4d653dce47b64aa617782e97acb55be4cba863516943cb252a" Jan 29 12:16:01 crc kubenswrapper[4766]: I0129 12:16:01.702880 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38c71cb6f4b84f4d653dce47b64aa617782e97acb55be4cba863516943cb252a"} err="failed to get container status \"38c71cb6f4b84f4d653dce47b64aa617782e97acb55be4cba863516943cb252a\": rpc error: code = NotFound desc = could not find container \"38c71cb6f4b84f4d653dce47b64aa617782e97acb55be4cba863516943cb252a\": container with ID starting with 38c71cb6f4b84f4d653dce47b64aa617782e97acb55be4cba863516943cb252a not found: ID does not exist" Jan 29 12:16:01 crc kubenswrapper[4766]: I0129 12:16:01.702956 4766 scope.go:117] "RemoveContainer" containerID="e680fb618ed3fdddbdfb06d9b0df7a96a449a104275cf143a0b85145d6b1e6f0" Jan 29 12:16:01 crc 
kubenswrapper[4766]: E0129 12:16:01.703684 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e680fb618ed3fdddbdfb06d9b0df7a96a449a104275cf143a0b85145d6b1e6f0\": container with ID starting with e680fb618ed3fdddbdfb06d9b0df7a96a449a104275cf143a0b85145d6b1e6f0 not found: ID does not exist" containerID="e680fb618ed3fdddbdfb06d9b0df7a96a449a104275cf143a0b85145d6b1e6f0" Jan 29 12:16:01 crc kubenswrapper[4766]: I0129 12:16:01.703751 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e680fb618ed3fdddbdfb06d9b0df7a96a449a104275cf143a0b85145d6b1e6f0"} err="failed to get container status \"e680fb618ed3fdddbdfb06d9b0df7a96a449a104275cf143a0b85145d6b1e6f0\": rpc error: code = NotFound desc = could not find container \"e680fb618ed3fdddbdfb06d9b0df7a96a449a104275cf143a0b85145d6b1e6f0\": container with ID starting with e680fb618ed3fdddbdfb06d9b0df7a96a449a104275cf143a0b85145d6b1e6f0 not found: ID does not exist" Jan 29 12:16:02 crc kubenswrapper[4766]: I0129 12:16:02.224478 4766 scope.go:117] "RemoveContainer" containerID="ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9" Jan 29 12:16:02 crc kubenswrapper[4766]: E0129 12:16:02.224727 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:16:03 crc kubenswrapper[4766]: I0129 12:16:03.235637 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fb72ff7-f357-4c9e-bcc8-80566b79f096" path="/var/lib/kubelet/pods/8fb72ff7-f357-4c9e-bcc8-80566b79f096/volumes" Jan 29 12:16:14 crc kubenswrapper[4766]: I0129 12:16:14.224276 4766 scope.go:117] "RemoveContainer" containerID="ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9" Jan 29 12:16:14 crc kubenswrapper[4766]: E0129 12:16:14.224991 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npgg8_openshift-machine-config-operator(5bdd08bb-d32c-44f7-b7f8-ff1664ea543a)\"" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" podUID="5bdd08bb-d32c-44f7-b7f8-ff1664ea543a" Jan 29 12:16:28 crc kubenswrapper[4766]: I0129 12:16:28.224102 4766 scope.go:117] "RemoveContainer" containerID="ef8ee4d66d4b0e197384a3b24ecd9bc7f815737fbd90a7d8bd2f68b9900878f9" Jan 29 12:16:28 crc kubenswrapper[4766]: I0129 12:16:28.783571 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npgg8" event={"ID":"5bdd08bb-d32c-44f7-b7f8-ff1664ea543a","Type":"ContainerStarted","Data":"d109ddb004e5245cd67635141ad877c16bbe4b525e7088391268bfc2324a9270"} Jan 29 12:17:18 crc kubenswrapper[4766]: I0129 12:17:18.985353 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vrb2p"] Jan 29 12:17:18 crc kubenswrapper[4766]: E0129 12:17:18.986390 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb72ff7-f357-4c9e-bcc8-80566b79f096" containerName="copy" Jan 29 12:17:18 crc kubenswrapper[4766]: 
I0129 12:17:18.986425 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb72ff7-f357-4c9e-bcc8-80566b79f096" containerName="copy" Jan 29 12:17:18 crc kubenswrapper[4766]: E0129 12:17:18.986456 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb72ff7-f357-4c9e-bcc8-80566b79f096" containerName="gather" Jan 29 12:17:18 crc kubenswrapper[4766]: I0129 12:17:18.986465 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb72ff7-f357-4c9e-bcc8-80566b79f096" containerName="gather" Jan 29 12:17:18 crc kubenswrapper[4766]: E0129 12:17:18.986497 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5efa96-2bbb-415b-b26f-1e2403df7502" containerName="collect-profiles" Jan 29 12:17:18 crc kubenswrapper[4766]: I0129 12:17:18.986506 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5efa96-2bbb-415b-b26f-1e2403df7502" containerName="collect-profiles" Jan 29 12:17:18 crc kubenswrapper[4766]: I0129 12:17:18.986665 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc5efa96-2bbb-415b-b26f-1e2403df7502" containerName="collect-profiles" Jan 29 12:17:18 crc kubenswrapper[4766]: I0129 12:17:18.986690 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fb72ff7-f357-4c9e-bcc8-80566b79f096" containerName="copy" Jan 29 12:17:18 crc kubenswrapper[4766]: I0129 12:17:18.986708 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fb72ff7-f357-4c9e-bcc8-80566b79f096" containerName="gather" Jan 29 12:17:18 crc kubenswrapper[4766]: I0129 12:17:18.987947 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vrb2p" Jan 29 12:17:18 crc kubenswrapper[4766]: I0129 12:17:18.990760 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vrb2p"] Jan 29 12:17:19 crc kubenswrapper[4766]: I0129 12:17:19.108172 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqn7b\" (UniqueName: \"kubernetes.io/projected/5ab27080-187a-43d2-b66e-a7cf2ff1414c-kube-api-access-cqn7b\") pod \"certified-operators-vrb2p\" (UID: \"5ab27080-187a-43d2-b66e-a7cf2ff1414c\") " pod="openshift-marketplace/certified-operators-vrb2p" Jan 29 12:17:19 crc kubenswrapper[4766]: I0129 12:17:19.109376 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ab27080-187a-43d2-b66e-a7cf2ff1414c-utilities\") pod \"certified-operators-vrb2p\" (UID: \"5ab27080-187a-43d2-b66e-a7cf2ff1414c\") " pod="openshift-marketplace/certified-operators-vrb2p" Jan 29 12:17:19 crc kubenswrapper[4766]: I0129 12:17:19.109571 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ab27080-187a-43d2-b66e-a7cf2ff1414c-catalog-content\") pod \"certified-operators-vrb2p\" (UID: \"5ab27080-187a-43d2-b66e-a7cf2ff1414c\") " pod="openshift-marketplace/certified-operators-vrb2p" Jan 29 12:17:19 crc kubenswrapper[4766]: I0129 12:17:19.211399 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ab27080-187a-43d2-b66e-a7cf2ff1414c-utilities\") pod \"certified-operators-vrb2p\" (UID: \"5ab27080-187a-43d2-b66e-a7cf2ff1414c\") " pod="openshift-marketplace/certified-operators-vrb2p" Jan 29 12:17:19 crc kubenswrapper[4766]: I0129 
12:17:19.211507 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ab27080-187a-43d2-b66e-a7cf2ff1414c-catalog-content\") pod \"certified-operators-vrb2p\" (UID: \"5ab27080-187a-43d2-b66e-a7cf2ff1414c\") " pod="openshift-marketplace/certified-operators-vrb2p" Jan 29 12:17:19 crc kubenswrapper[4766]: I0129 12:17:19.211597 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqn7b\" (UniqueName: \"kubernetes.io/projected/5ab27080-187a-43d2-b66e-a7cf2ff1414c-kube-api-access-cqn7b\") pod \"certified-operators-vrb2p\" (UID: \"5ab27080-187a-43d2-b66e-a7cf2ff1414c\") " pod="openshift-marketplace/certified-operators-vrb2p" Jan 29 12:17:19 crc kubenswrapper[4766]: I0129 12:17:19.212128 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ab27080-187a-43d2-b66e-a7cf2ff1414c-utilities\") pod \"certified-operators-vrb2p\" (UID: \"5ab27080-187a-43d2-b66e-a7cf2ff1414c\") " pod="openshift-marketplace/certified-operators-vrb2p" Jan 29 12:17:19 crc kubenswrapper[4766]: I0129 12:17:19.212198 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ab27080-187a-43d2-b66e-a7cf2ff1414c-catalog-content\") pod \"certified-operators-vrb2p\" (UID: \"5ab27080-187a-43d2-b66e-a7cf2ff1414c\") " pod="openshift-marketplace/certified-operators-vrb2p" Jan 29 12:17:19 crc kubenswrapper[4766]: I0129 12:17:19.241551 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqn7b\" (UniqueName: \"kubernetes.io/projected/5ab27080-187a-43d2-b66e-a7cf2ff1414c-kube-api-access-cqn7b\") pod \"certified-operators-vrb2p\" (UID: \"5ab27080-187a-43d2-b66e-a7cf2ff1414c\") " pod="openshift-marketplace/certified-operators-vrb2p" Jan 29 12:17:19 crc kubenswrapper[4766]: I0129 12:17:19.307263 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vrb2p" Jan 29 12:17:19 crc kubenswrapper[4766]: I0129 12:17:19.743011 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vrb2p"] Jan 29 12:17:20 crc kubenswrapper[4766]: I0129 12:17:20.155183 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrb2p" event={"ID":"5ab27080-187a-43d2-b66e-a7cf2ff1414c","Type":"ContainerStarted","Data":"f6d7123fa6e1d12fb54fe62b1466d1ffaa3c7e4d5ded9ed1bb70a7fe07033414"} Jan 29 12:17:21 crc kubenswrapper[4766]: I0129 12:17:21.164960 4766 generic.go:334] "Generic (PLEG): container finished" podID="5ab27080-187a-43d2-b66e-a7cf2ff1414c" containerID="b85edafad758c8a46a73e4f302c678f8cb920f2c91e0aaab8060640a5d0d651b" exitCode=0 Jan 29 12:17:21 crc kubenswrapper[4766]: I0129 12:17:21.165319 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrb2p" event={"ID":"5ab27080-187a-43d2-b66e-a7cf2ff1414c","Type":"ContainerDied","Data":"b85edafad758c8a46a73e4f302c678f8cb920f2c91e0aaab8060640a5d0d651b"} Jan 29 12:17:21 crc kubenswrapper[4766]: I0129 12:17:21.168933 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 12:17:22 crc kubenswrapper[4766]: I0129 12:17:22.173979 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrb2p" event={"ID":"5ab27080-187a-43d2-b66e-a7cf2ff1414c","Type":"ContainerStarted","Data":"4f0405699fee5b5acfb4bd12415082a23fff4ee0a4e74f18947cd163b36c55b1"} Jan 29 12:17:23 crc kubenswrapper[4766]: I0129 12:17:23.182156 4766 generic.go:334] "Generic (PLEG): container finished" podID="5ab27080-187a-43d2-b66e-a7cf2ff1414c" containerID="4f0405699fee5b5acfb4bd12415082a23fff4ee0a4e74f18947cd163b36c55b1" exitCode=0 Jan 29 12:17:23 crc kubenswrapper[4766]: I0129 12:17:23.182206 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrb2p" event={"ID":"5ab27080-187a-43d2-b66e-a7cf2ff1414c","Type":"ContainerDied","Data":"4f0405699fee5b5acfb4bd12415082a23fff4ee0a4e74f18947cd163b36c55b1"} Jan 29 12:17:24 crc kubenswrapper[4766]: I0129 12:17:24.191543 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrb2p" event={"ID":"5ab27080-187a-43d2-b66e-a7cf2ff1414c","Type":"ContainerStarted","Data":"8f7105b0ab53084855ca778cd3b98c6eb999632450fa0b971735d45ac3896d8c"} Jan 29 12:17:24 crc kubenswrapper[4766]: I0129 12:17:24.214841 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vrb2p" podStartSLOduration=3.668305802 podStartE2EDuration="6.21482077s" podCreationTimestamp="2026-01-29 12:17:18 +0000 UTC" firstStartedPulling="2026-01-29 12:17:21.168333617 +0000 UTC m=+3378.280726668" lastFinishedPulling="2026-01-29 12:17:23.714848625 +0000 UTC m=+3380.827241636" observedRunningTime="2026-01-29 12:17:24.209574891 +0000 UTC m=+3381.321967912" watchObservedRunningTime="2026-01-29 12:17:24.21482077 +0000 UTC m=+3381.327213781" Jan 29 12:17:29 crc kubenswrapper[4766]: I0129 12:17:29.308180 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vrb2p" Jan 29 12:17:29 crc kubenswrapper[4766]: I0129 12:17:29.308865 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vrb2p" 
Jan 29 12:17:29 crc kubenswrapper[4766]: I0129 12:17:29.351872 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vrb2p" Jan 29 12:17:30 crc kubenswrapper[4766]: I0129 12:17:30.296738 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vrb2p" Jan 29 12:17:30 crc kubenswrapper[4766]: I0129 12:17:30.346514 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vrb2p"] Jan 29 12:17:32 crc kubenswrapper[4766]: I0129 12:17:32.267130 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vrb2p" podUID="5ab27080-187a-43d2-b66e-a7cf2ff1414c" containerName="registry-server" containerID="cri-o://8f7105b0ab53084855ca778cd3b98c6eb999632450fa0b971735d45ac3896d8c" gracePeriod=2 Jan 29 12:17:33 crc kubenswrapper[4766]: I0129 12:17:33.276003 4766 generic.go:334] "Generic (PLEG): container finished" podID="5ab27080-187a-43d2-b66e-a7cf2ff1414c" containerID="8f7105b0ab53084855ca778cd3b98c6eb999632450fa0b971735d45ac3896d8c" exitCode=0 Jan 29 12:17:33 crc kubenswrapper[4766]: I0129 12:17:33.276107 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrb2p" event={"ID":"5ab27080-187a-43d2-b66e-a7cf2ff1414c","Type":"ContainerDied","Data":"8f7105b0ab53084855ca778cd3b98c6eb999632450fa0b971735d45ac3896d8c"} Jan 29 12:17:34 crc kubenswrapper[4766]: I0129 12:17:34.510631 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vrb2p" Jan 29 12:17:34 crc kubenswrapper[4766]: I0129 12:17:34.549533 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ab27080-187a-43d2-b66e-a7cf2ff1414c-utilities\") pod \"5ab27080-187a-43d2-b66e-a7cf2ff1414c\" (UID: \"5ab27080-187a-43d2-b66e-a7cf2ff1414c\") " Jan 29 12:17:34 crc kubenswrapper[4766]: I0129 12:17:34.549593 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqn7b\" (UniqueName: \"kubernetes.io/projected/5ab27080-187a-43d2-b66e-a7cf2ff1414c-kube-api-access-cqn7b\") pod \"5ab27080-187a-43d2-b66e-a7cf2ff1414c\" (UID: \"5ab27080-187a-43d2-b66e-a7cf2ff1414c\") " Jan 29 12:17:34 crc kubenswrapper[4766]: I0129 12:17:34.549623 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ab27080-187a-43d2-b66e-a7cf2ff1414c-catalog-content\") pod \"5ab27080-187a-43d2-b66e-a7cf2ff1414c\" (UID: \"5ab27080-187a-43d2-b66e-a7cf2ff1414c\") " Jan 29 12:17:34 crc kubenswrapper[4766]: I0129 12:17:34.550846 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ab27080-187a-43d2-b66e-a7cf2ff1414c-utilities" (OuterVolumeSpecName: "utilities") pod "5ab27080-187a-43d2-b66e-a7cf2ff1414c" (UID: "5ab27080-187a-43d2-b66e-a7cf2ff1414c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:17:34 crc kubenswrapper[4766]: I0129 12:17:34.555464 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ab27080-187a-43d2-b66e-a7cf2ff1414c-kube-api-access-cqn7b" (OuterVolumeSpecName: "kube-api-access-cqn7b") pod "5ab27080-187a-43d2-b66e-a7cf2ff1414c" (UID: "5ab27080-187a-43d2-b66e-a7cf2ff1414c"). InnerVolumeSpecName "kube-api-access-cqn7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:17:34 crc kubenswrapper[4766]: I0129 12:17:34.598806 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ab27080-187a-43d2-b66e-a7cf2ff1414c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ab27080-187a-43d2-b66e-a7cf2ff1414c" (UID: "5ab27080-187a-43d2-b66e-a7cf2ff1414c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:17:34 crc kubenswrapper[4766]: I0129 12:17:34.651309 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ab27080-187a-43d2-b66e-a7cf2ff1414c-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:17:34 crc kubenswrapper[4766]: I0129 12:17:34.651370 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqn7b\" (UniqueName: \"kubernetes.io/projected/5ab27080-187a-43d2-b66e-a7cf2ff1414c-kube-api-access-cqn7b\") on node \"crc\" DevicePath \"\"" Jan 29 12:17:34 crc kubenswrapper[4766]: I0129 12:17:34.651393 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ab27080-187a-43d2-b66e-a7cf2ff1414c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:17:35 crc kubenswrapper[4766]: I0129 12:17:35.298356 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrb2p" event={"ID":"5ab27080-187a-43d2-b66e-a7cf2ff1414c","Type":"ContainerDied","Data":"f6d7123fa6e1d12fb54fe62b1466d1ffaa3c7e4d5ded9ed1bb70a7fe07033414"} Jan 29 12:17:35 crc kubenswrapper[4766]: I0129 12:17:35.298492 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vrb2p" Jan 29 12:17:35 crc kubenswrapper[4766]: I0129 12:17:35.298742 4766 scope.go:117] "RemoveContainer" containerID="8f7105b0ab53084855ca778cd3b98c6eb999632450fa0b971735d45ac3896d8c" Jan 29 12:17:35 crc kubenswrapper[4766]: I0129 12:17:35.324044 4766 scope.go:117] "RemoveContainer" containerID="4f0405699fee5b5acfb4bd12415082a23fff4ee0a4e74f18947cd163b36c55b1" Jan 29 12:17:35 crc kubenswrapper[4766]: I0129 12:17:35.324340 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vrb2p"] Jan 29 12:17:35 crc kubenswrapper[4766]: I0129 12:17:35.331141 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vrb2p"] Jan 29 12:17:35 crc kubenswrapper[4766]: I0129 12:17:35.342963 4766 scope.go:117] "RemoveContainer" containerID="b85edafad758c8a46a73e4f302c678f8cb920f2c91e0aaab8060640a5d0d651b" Jan 29 12:17:37 crc kubenswrapper[4766]: I0129 12:17:37.237390 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ab27080-187a-43d2-b66e-a7cf2ff1414c" path="/var/lib/kubelet/pods/5ab27080-187a-43d2-b66e-a7cf2ff1414c/volumes"